[
  {
    "path": "Editing-in-Diffusion.md",
    "content": "# Image Editing In Diffusion \n\n## text-to-image\n[arxiv 2025.03] Lumina-Image 2.0: A Unified and Efficient Image Generative Framework  [[PDF](https://arxiv.org/abs/2503.21758)]\n\n[arxiv 2025.04] Seedream 3.0 Technical Report  [[PDF](https://arxiv.org/abs/2504.11346),[Page](https://team.doubao.com/zh/tech/seedream3_0)] \n\n[arxiv 2025.09] SD3.5-Flash: Distribution-Guided Distillation of Generative Flows [[PDF](https://arxiv.org/pdf/2509.21318)]\n\n[arxiv 2025.09]  Seedream 4.0: Toward Next-generation Multimodal Image Generation [[PDF](https://arxiv.org/abs/2509.20427), [Page](https://seed.bytedance.com/zh/seedream4_0)]\n\n[arxiv 2025.12] Ovis-Image Technical Report  [[PDF](https://arxiv.org/abs/2511.22982),[Page](https://github.com/AIDC-AI/Ovis-Image)] ![Code](https://img.shields.io/github/stars/AIDC-AI/Ovis-Image?style=social&label=Star)\n\n[arxiv 2025.12] Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer  [[PDF](https://arxiv.org/abs/2511.22699),[Page](https://tongyi-mai.github.io/Z-Image-blog/)] ![Code](https://img.shields.io/github/stars/Tongyi-MAI/Z-Image?style=social&label=Star)\n\n[arxiv 2026.01] GLM-Image  [[PDF](https://z.ai/blog/glm-image),[Page](https://github.com/zai-org/GLM-Image)] ![Code](https://img.shields.io/github/stars/zai-org/GLM-Image?style=social&label=Star)\n\n[arxiv 2026.02] FireRed-Image-Edit-1.0 Techinical Report  [[PDF](https://arxiv.org/abs/2602.13344),[Page](https://github.com/FireRedTeam/FireRed-Image-Edit)] ![Code](https://img.shields.io/github/stars/FireRedTeam/FireRed-Image-Edit?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## pixel \n[arxiv 2025.11]  DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation [[PDF](https://arxiv.org/pdf/2511.19365),[Page](https://zehong-ma.github.io/DeCo/)] ![Code](https://img.shields.io/github/stars/Zehong-Ma/DeCo?style=social&label=Star)\n\n[arxiv 2025.11] Back to Basics: Let Denoising Generative Models Denoise  [[PDF](https://arxiv.org/abs/2511.13720),[Page](https://github.com/LTH14/JiT)] ![Code](https://img.shields.io/github/stars/LTH14/JiT?style=social&label=Star)\n\n[arxiv 2025.11]  DiP: Taming Diffusion Models in Pixel Space [[PDF](https://arxiv.org/pdf/2511.18822)]\n\n[arxiv 2025.11] PixelDiT: Pixel Diffusion Transformers for Image Generation  [[PDF](https://arxiv.org/pdf/2511.20645)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Editing \n\n*[ICLR2022; Stanford & CMU] ***SDEdit:*** Guided Image Synthesis and Editing with Stochastic Differential Equations [[PDF](https://arxiv.org/pdf/2108.01073.pdf), [Page](https://sde-image-editing.github.io/)]\n\n*[arxiv 22.08; meta] ***Prompt-to-Prompt*** Image Editing with Cross Attention Control [[PDF](https://arxiv.org/abs/2208.01626) ]\n\n[arxiv 22.08; Scale AI] ***Direct Inversion***: Optimization-Free Text-Driven Real Image Editing with Diffusion Models [[PDF](https://arxiv.org/pdf/2211.07825)]\n\n[arxiv 22.11; UC Berkeley] ***InstructPix2Pix***: Learning to Follow Image Editing Instructions [[PDF](https://arxiv.org/pdf/2211.09800.pdf), [Page](https://www.timothybrooks.com/instruct-pix2pix)]\n\n[arxiv 2022; Nvidia ] eDiffi: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers \\[[PDF](https://arxiv.org/pdf/2211.01324.pdf), Code\\]\n\n[arxiv 2022; Goolge ] Imagic: Text-Based Real Image Editing with Diffusion 
Models [[PDF](https://arxiv.org/pdf/2210.09276.pdf)]\n\n[arxiv 2022] ***DiffEdit***: Diffusion-based semantic image editing with mask guidance [[Paper](https://openreview.net/forum?id=3lge0p5o-M-)]\n\n[arxiv 2022] ***DiffIT***: Diffusion-based Image Translation Using Disentangled Style and Content Representation [[Paper](https://openreview.net/pdf?id=Nayau9fwXU)]\n\n[arxiv 2022] Dual Diffusion Implicit Bridges for Image-to-image Translation [[Paper](https://openreview.net/pdf?id=5HLoTvVGDe)]\n\n*[ICLR 23, Google] Classifier-free Diffusion Guidance [[Paper](https://arxiv.org/pdf/2207.12598.pdf)]\n\n[arxiv 2022] ***EDICT***: Exact Diffusion Inversion via Coupled Transformations [[PDF](https://arxiv.org/abs/2211.12446)]\n\n[arxiv 22.11] ***Paint by Example***: Exemplar-based Image Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2211.13227)]\n\n[arxiv 2022.10; ByteDance]MagicMix: Semantic Mixing with Diffusion Models [[PDF](https://arxiv.org/abs/2210.16056)]\n\n[arxiv 2022.12; Microsoft]X-Paste: Revisit Copy-Paste at Scale with CLIP and StableDiffusion [[PDF](https://arxiv.org/pdf/2212.03863.pdf)]\n\n[arxiv 2022.12]SINE: SINgle Image Editing with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2212.04489.pdf)]\n\n[arxiv 2022.12]Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models[[PDF](https://arxiv.org/pdf/2212.08698.pdf)]\n\n[arxiv 2022.12]Optimizing Prompts for Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2212.09611.pdf)]\n\n[arxiv 2023.01]Guiding Text-to-Image Diffusion Model Towards Grounded Generation [[PDF](https://arxiv.org/pdf/2301.05221.pdf), [Page](https://lipurple.github.io/Grounded_Diffusion/)]\n\n[arxiv 2023.02, Adobe]Controlled and Conditional Text to Image Generation with Diffusion Prior [[PDF](https://arxiv.org/abs/2302.11710)]\n\n[arxiv 2023.02]Learning Input-agnostic Manipulation Directions in StyleGAN with Text Guidance [[PDF](https://arxiv.org/abs/2302.13331)]\n\n[arxiv 2023.02]Towards Enhanced Controllability of Diffusion Models[[PDF](https://arxiv.org/pdf/2302.14368.pdf)]\n\n[arxiv 2023.03]X&Fuse: Fusing Visual Information in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2303.01000)]\n\n[arxiv 2023.03]Lformer: Text-to-Image Generation with L-shape Block Parallel Decoding [[PDF](https://arxiv.org/abs/2303.03800)]\n\n[arxiv 2023.03]CoralStyleCLIP: Co-optimized Region and Layer Selection for Image Editing [[PDF](https://arxiv.org/abs/2303.05031)]\n\n[arxiv 2023.03]Erasing Concepts from Diffusion Models [[PDF](https://arxiv.org/abs/2303.07345), [Code](https://github.com/rohitgandikota/erasing)]\n\n[arxiv 2023.03]Editing Implicit Assumptions in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.08084), [Page](https://time-diffusion.github.io/)]\n\n[arxiv 2023.03]Localizing Object-level Shape Variations with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.11306), [Page](https://orpatashnik.github.io/local-prompt-mixing/)]\n\n[arxiv 2023.03]SVDiff: Compact Parameter Space for Diffusion Fine-Tuning[[PDF](https://arxiv.org/abs/2303.11305)]\n\n[arxiv 2023.03]Ablating Concepts in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.13516), [Page](https://www.cs.cmu.edu/~concept-ablation/)]\n\n[arxiv 2023.03]ReVersion: Diffusion-Based Relation Inversion from Images [[PDF](https://arxiv.org/abs/2303.13495), [Page](https://ziqihuangg.github.io/projects/reversion.html)]\n\n[arxiv 2023.03]MagicFusion: Boosting Text-to-Image Generation Performance by 
Fusing Diffusion Models [[PDF](https://arxiv.org/abs/2303.13126)]\n\n[arxiv 2023.04]One-shot Unsupervised Domain Adaptation with Personalized Diffusion Models [[PDF](https://arxiv.org/abs/2303.18080)]\n\n[arxiv 2023.04]3D-aware Image Generation using 2D Diffusion Models [[PDF](https://arxiv.org/abs/2303.17905)]\n\n[arxiv 2023.04]Inst-Inpaint: Instructing to Remove Objects with Diffusion Models[[PDF](https://arxiv.org/abs/2304.03246)]\n\n[arxiv 2023.04]Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis [[PDF](https://t.co/GJNrYFA8wS)]\n\n->[arxiv 2023.04]Expressive Text-to-Image Generation with Rich Text [[PDF](https://arxiv.org/abs/2304.06720), [Page](https://rich-text-to-image.github.io/)]\n\n[arxiv 2023.04]DiffusionRig: Learning Personalized Priors for Facial Appearance Editing [[PDF](https://arxiv.org/abs/2304.06711)]\n\n[arxiv 2023.04]An Edit Friendly DDPM Noise Space: Inversion and Manipulations [[PDF](https://arxiv.org/abs/2304.06140)]\n\n[arxiv 2023.04]Gradient-Free Textual Inversion [[PDF](https://arxiv.org/abs/2304.05818)]\n\n[arxiv 2023.04]Improving Diffusion Models for Scene Text Editing with Dual Encoders [[PDF](https://arxiv.org/pdf/2304.05568.pdf)]\n\n[arxiv 2023.04]Delta Denoising Score [[PDF](https://arxiv.org/abs/2304.07090), [Page](https://delta-denoising-score.github.io/)]\n\n[arxiv 2023.04]MasaCtrl: Tuning-Free Mutual Self-Attention Control for Consistent Image Synthesis and Editing [[PDF](https://arxiv.org/abs/2304.08465), [Page](https://ljzycmd.github.io/projects/MasaCtrl)]\n\n[arxiv 2023.04]Edit Everything: A Text-Guided Generative System for Images Editing [[PDF](https://arxiv.org/pdf/2304.14006.pdf)]\n\n[arxiv 2023.05]In-Context Learning Unlocked for Diffusion Models [[PDF](https://arxiv.org/pdf/2305.01115.pdf)]\n\n[arxiv 2023.05]ReGeneration Learning of Diffusion Models with Rich Prompts for Zero-Shot Image Translation [[PDF](https://arxiv.org/abs/2305.04651)]\n\n[arxiv 2023.05]RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths [[PDF](https://arxiv.org/abs/2305.18295)]\n\n[arxiv 2023.05]Controllable Text-to-Image Generation with GPT-4 [[PDF](https://arxiv.org/abs/2305.18583)]\n\n[arxiv 2023.06]Diffusion Self-Guidance for Controllable Image Generation [[PDF](https://arxiv.org/abs/2306.00986), [Page](https://dave.ml/selfguidance/)]\n\n[arxiv 2023.06]SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions [[PDF](https://arxiv.org/abs/2306.05178), [Page](https://syncdiffusion.github.io/)]\n\n[arxiv 2023.06]MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing[[PDF](https://arxiv.org/abs/2306.10012), [Page](https://osu-nlp-group.github.io/MagicBrush/)]\n\n[arxiv 2023.06]Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation [[PDF](https://arxiv.org/abs/2306.08247)]\n\n->[arxiv 2023.06]Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model [[PDF](https://arxiv.org/abs/2306.07596)]\n\n[arxiv 2023.06]Controlling Text-to-Image Diffusion by Orthogonal Finetuning [[PDF](https://arxiv.org/abs/2306.07280)]\n\n[arxiv 2023.06]Localized Text-to-Image Generation for Free via Cross Attention Control[[PDF](https://arxiv.org/abs/2306.14636)]\n\n[arxiv 2023.06]Filtered-Guided Diffusion: Fast Filter Guidance for Black-Box Diffusion Models [[PDF](https://arxiv.org/abs/2306.17141)]\n\n[arxiv 2023.06]PFB-Diff: Progressive Feature Blending Diffusion for Text-driven Image Editing 
[[PDF](https://arxiv.org/abs/2306.16894)]\n\n[arxiv 2023.06]DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing[[Page](https://yujun-shi.github.io/projects/dragdiffusion.html)]\n\n[arxiv 2023.07]Counting Guidance for High Fidelity Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2306.17567.pdf)]\n\n[arxiv 2023.07]LEDITS: Real Image Editing with DDPM Inversion and Semantic Guidance [[PDF](https://arxiv.org/abs/2307.00522)]\n\n[arxiv 2023.07]DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models [[PDF](https://arxiv.org/pdf/2307.02421.pdf)]\n\n[arxiv 2023.07]Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2304.02642)]\n\n[arxiv 2023.07]Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation [[PDF](https://arxiv.org/abs/2307.08448)]\n\n[arxiv 2023.07]FABRIC: Personalizing Diffusion Models with Iterative Feedback [[PDF](https://arxiv.org/pdf/2307.10159.pdf)]\n\n[arxiv 2023.07]Understanding the Latent Space of Diffusion Models through the Lens of Riemannian Geometry [[PDF](https://arxiv.org/pdf/2307.12868.pdf)]\n\n[arxiv 2023.07]Interpolating between Images with Diffusion Models [[PDF](https://arxiv.org/abs/2307.12560)]\n\n[arxiv 2023.07]TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition [[PDF](https://arxiv.org/abs/2307.12493)]\n\n[arxiv 2023.08]ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation [[PDF](https://arxiv.org/abs/2308.00906)]\n\n[arxiv 2023.09]Iterative Multi-granular Image Editing using Diffusion Models [[PDF](https://arxiv.org/pdf/2309.00613.pdf)]\n\n[arxiv 2023.09]InstructDiffusion: A Generalist Modeling Interface for Vision Tasks [[PDF](https://arxiv.org/abs/2309.03895)]\n\n[arxiv 2023.09]InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation [[PDF](https://arxiv.org/abs/2309.06380),[Page](https://github.com/gnobitab/InstaFlow)]\n\n[arxiv 2023.09]ITI-GEN: Inclusive Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2309.05569.pdf), [Page](https://czhang0528.github.io/iti-gen)]\n\n[arxiv 2023.09]MaskDiffusion: Boosting Text-to-Image Consistency with Conditional Mask [[PDF](https://arxiv.org/pdf/2309.04399.pdf)]\n\n[arxiv 2023.09]FreeU: Free Lunch in Diffusion U-Net [[PDF](https://arxiv.org/abs/2309.11497),[Page](https://chenyangsi.top/FreeU/)]\n\n[arxiv 2023.09]Dream the Impossible: Outlier Imagination with Diffusion Models [[PDF](https://arxiv.org/abs/2309.13415)]\n\n[arxiv 2023.09]Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing [[PDF](https://arxiv.org/abs/2309.15664), [Page](https://github.com/wangkai930418/DPL)]\n\n[arxiv 2023.09]RealFill: Reference-Driven Generation for Authentic Image Completion [[PDF](https://arxiv.org/pdf/2309.16668.pdf), [Page](https://realfill.github.io/)]\n\n[arxiv 2023.10]Aligning Text-to-Image Diffusion Models with Reward Backpropagation [[PDF](https://arxiv.org/abs/2310.03739),[Page](https://align-prop.github.io/)]\n\n[arxiv 2023.10]InstructCV: Instruction-Tuned Text-to-Image Diffusion Models as Vision Generalists [[PDF](https://arxiv.org/abs/2310.00390)]\n\n[arxiv 2023.10]Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [[PDF](https://arxiv.org/abs/2310.00224)]\n\n[arxiv 2023.10]Guiding Instruction-based 
Image Editing via Multimodal Large Language Models [[PDF](https://arxiv.org/abs/2309.17102),[Page](https://mllm-ie.github.io/)]\n\n[arxiv 2023.10]Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion [[PDF](https://arxiv.org/abs/2310.03502)]\n\n[arxiv 2023.10]JointNet: Extending Text-to-Image Diffusion for Dense Distribution Modeling [[PDF](https://arxiv.org/pdf/2310.06347.pdf)]\n\n[arxiv 2023.10]Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model [[PDF](https://arxiv.org/abs/2310.07222)]\n\n[arxiv 2023.10]Unsupervised Discovery of Interpretable Directions in h-space of Pre-trained Diffusion Models [[PDF](https://arxiv.org/abs/2310.09912)]\n\n[arxiv 2023.10]Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation [[PDF](https://arxiv.org/abs/2310.08541),[Page](https://idea2img.github.io/)]\n\n[arxiv 2023.10]SingleInsert: Inserting New Concepts from a Single Image into Text-to-Image Models for Flexible Editing [[PDF](https://arxiv.org/abs/2310.08094),[Page](https://jarrentwu1031.github.io/SingleInsert-web/)]\n\n[arxiv 2023.10]CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation [[PDF](https://arxiv.org/abs/2310.13165)]\n\n[arxiv 2023.10]CustomNet: Zero-shot Object Customization with Variable-Viewpoints in Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2310.19784),[Page](https://jiangyzy.github.io/CustomNet/)]\n\n[arxiv 2023.11]LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing [[PDF](https://arxiv.org/abs/2311.00571),[Page](https://llava-vl.github.io/llava-interactive/)]\n\n[arxiv 2023.11]The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing[[PDF](https://arxiv.org/abs/2311.01410)]\n\n[arxiv 2023.11]FaceComposer: A Unified Model for Versatile Facial Content Creation [[PDF](https://openreview.net/pdf?id=xrK3QA9mLo)]\n\n[arxiv 2023.11]Emu Edit: Precise Image Editing via Recognition and Generation Tasks[[PDF](https://arxiv.org/abs/2311.10089),[Page](https://emu-edit.metademolab.com/)]\n\n[arxiv 2023.11]Fine-grained Appearance Transfer with Diffusion Models [[PDF](https://arxiv.org/abs/2311.16513), [Page](https://github.com/babahui/Fine-grained-Appearance-Transfer)]\n\n[arxiv 2023.11]Text-Driven Image Editing via Learnable Regions [[PDF](https://arxiv.org/abs/2311.16432), [Page](https://yuanze-lin.me/LearnableRegions_page)]\n\n[arxiv 2023.12]Contrastive Denoising Score for Text-guided Latent Diffusion Image Editing [[PDF](https://arxiv.org/abs/2311.18608),[Page](https://hyelinnam.github.io/CDS/)]\n\n[arxiv 2023.12]Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models [[PDF](https://arxiv.org/abs/2312.04410),[Page](https://github.com/SHI-Labs/Smooth-Diffusion)]\n\n[arxiv 2023.12]ControlNet-XS: Designing an Efficient and Effective Architecture for Controlling Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.06573)]\n\n[arxiv 2023.12]DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing [[PDF](https://arxiv.org/abs/2312.07409)]\n\n[arxiv 2023.12]AdapEdit: Spatio-Temporal Guided Adaptive Editing Algorithm for Text-Based Continuity-Sensitive Image Editing [[PDF](https://arxiv.org/abs/2312.08019)]\n\n[arxiv 2023.12]LIME: Localized Image Editing via Attention Regularization in Diffusion 
Models [[PDF](https://arxiv.org/abs/2312.09256)]\n\n[arxiv 2023.12]Diffusion Cocktail: Fused Generation from Diffusion Models [[PDF](https://arxiv.org/abs/2312.08873)]\n\n[arxiv 2023.12]Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.12416)]\n\n[arxiv 2023.12]Fixed-point Inversion for Text-to-image diffusion models [[PDF](https://arxiv.org/abs/2312.12540)]\n\n[arxiv 2023.12]StreamDiffusion: A Pipeline-level Solution for Real-time Interactive Generation [[PDF](https://arxiv.org/abs/2312.12491)]\n\n[arxiv 2023.12]MAG-Edit: Localized Image Editing in Complex Scenarios via Mask-Based Attention-Adjusted Guidance [[PDF](https://arxiv.org/abs/2312.11396),[Page](https://mag-edit.github.io/)]\n\n[arxiv 2023.12]Tuning-Free Inversion-Enhanced Control for Consistent Image Editing [[PDF](https://arxiv.org/abs/2312.14611)]\n\n[arxiv 2023.12]High-Fidelity Diffusion-based Image Editing [[PDF](https://arxiv.org/abs/2312.15707)]\n\n[arxiv 2023.12]ZONE: Zero-Shot Instruction-Guided Local Editing [[PDF](https://arxiv.org/abs/2312.16794)]\n\n[arxiv 2024.01]PIXART-δ: Fast and Controllable Image Generation with Latent Consistency Models [[PDF](https://arxiv.org/abs/2401.05252)]\n\n[arxiv 2024.01]Wavelet-Guided Acceleration of Text Inversion in Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2401.09794)]\n\n[arxiv 2024.01]Edit One for All: Interactive Batch Image Editing [[PDF](https://arxiv.org/abs/2401.10219),[Page](https://thaoshibe.github.io/edit-one-for-all/)]\n\n[arxiv 2024.01]UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion [[PDF](https://arxiv.org/abs/2401.13388)]\n\n[arxiv 2024.01]Text Image Inpainting via Global Structure-Guided Diffusion Models [[PDF](https://arxiv.org/abs/2401.14832)]\n\n[arxiv 2024.01]Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators [[PDF](https://arxiv.org/abs/2401.18085)]\n\n[arxiv 2024.02]Latent Inversion with Timestep-aware Sampling for Training-free Non-rigid Editing [[PDF](https://arxiv.org/abs/2402.08601)]\n\n[arxiv 2024.02]DiLightNet: Fine-grained Lighting Control for Diffusion-based Image Generation[[PDF](https://arxiv.org/abs/2402.11929)]\n\n[arxiv 2024.02]CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [[PDF](https://arxiv.org/abs/2402.17624)]\n\n[arxiv 2024.03]Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks [[PDF](https://arxiv.org/abs/2403.00644)]\n\n[arxiv 2024.03]LoMOE: Localized Multi-Object Editing via Multi-Diffusion [[PDF](https://arxiv.org/abs/2403.00437)]\n\n[arxiv 2024.03]Towards Understanding Cross and Self-Attention in Stable Diffusion for Text-Guided Image Editing [[PDF](https://arxiv.org/abs/2403.03431)]\n\n[arxiv 2024.03]StableDrag: Stable Dragging for Point-based Image Editing[[PDF](https://arxiv.org/abs/2403.04437)]\n\n[arxiv 2024.03]InstructGIE: Towards Generalizable Image Editing [[PDF](https://arxiv.org/abs/2403.05018)]\n\n[arxiv 2024.03]An Item is Worth a Prompt: Versatile Image Editing with Disentangled Control [[PDF](https://arxiv.org/abs/2403.04880)]\n\n[arxiv 2024.03]Holo-Relighting: Controllable Volumetric Portrait Relighting from a Single Image [[PDF](https://arxiv.org/abs/2403.09632)]\n\n[arxiv 2024.03]Editing Massive Concepts in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2403.13807),[Page](https://silentview.github.io/EMCID/)]\n\n[arxiv 2024.03]Ground-A-Score: Scaling Up the Score Distillation for Multi-Attribute Editing 
[[PDF](https://arxiv.org/abs/2403.13551)]\n\n[arxiv 2024.03]Magic Fixup: Streamlining Photo Editing by Watching Dynamic Videos [[PDF](https://magic-fixup.github.io/magic_fixup.pdf),[Page](https://magic-fixup.github.io/)]\n\n[arxiv 2024.03]LASPA: Latent Spatial Alignment for Fast Training-free Single Image Editing [[PDF](https://arxiv.org/abs/2403.12585)]\n\n[arxiv 2024.03]ReNoise: Real Image Inversion Through Iterative Noising[[PDF](https://arxiv.org/abs/2403.14602),[Page](https://garibida.github.io/ReNoise-Inversion/)]\n\n[arxiv 2024.03]AID: Attention Interpolation of Text-to-Image Diffusion [[PDF](https://arxiv.org/abs/2403.17924),[Page](https://github.com/QY-H00/attention-interpolation-diffusion)]\n\n[arxiv 2024.03]InstructBrush: Learning Attention-based Instruction Optimization for Image Editing [[PDF](https://arxiv.org/abs/2403.18660)]\n\n[arxiv 2024.03]TextCraftor: Your Text Encoder Can be Image Quality Controller [[PDF](https://arxiv.org/abs/2403.18978)]\n\n[arxiv 2024.04]Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation [[PDF](https://arxiv.org/abs/2404.01050),[Page](https://github.com/haofengl/DragNoise)]\n\n[arxiv 2024.04]Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2404.02747)]\n\n[arxiv 2024.04]SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing [[PDF](https://arxiv.org/abs/2404.05717)]\n\n[arxiv 2024.04]Responsible Visual Editing [[PDF](https://arxiv.org/abs/2404.05580)]\n\n[arxiv 2024.04]ByteEdit: Boost, Comply and Accelerate Generative Image Editing [[PDF](https://arxiv.org/abs/2404.04860)]\n\n[arxiv 2024.04]ShoeModel: Learning to Wear on the User-specified Shoes via Diffusion Model [[PDF](https://arxiv.org/abs/2404.04833)]\n\n[arxiv 2024.04]GoodDrag: Towards Good Practices for Drag Editing with Diffusion Models[[PDF](https://arxiv.org/abs/2404.07206)]\n\n[arxiv 2024.04]HQ-Edit: A High-Quality Dataset for Instruction-based Image Editing [[PDF](https://arxiv.org/abs/2404.09990)]\n\n[arxiv 2024.04]MaxFusion: Plug&Play Multi-Modal Generation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2404.09977)]\n\n[arxiv 2024.04]Magic Clothing: Controllable Garment-Driven Image Synthesis [[PDF](https://arxiv.org/abs/2404.09512)]\n\n[arxiv 2024.04]Factorized Diffusion: Perceptual Illusions by Noise Decomposition [[PDF](https://arxiv.org/abs/2404.11615),[Page](https://dangeng.github.io/factorized_diffusion/)]\n\n[arxiv 2024.04]TiNO-Edit: Timestep and Noise Optimization for Robust Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2404.11120)]\n\n[arxiv 2024.04]Lazy Diffusion Transformer for Interactive Image Editing [[PDF](https://arxiv.org/abs/2404.12382)]\n\n[arxiv 2024.04]FreeDiff: Progressive Frequency Truncation for Image Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2404.11895)]\n\n[arxiv 2024.04]GeoDiffuser: Geometry-Based Image Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2404.14403)]\n\n[arxiv 2024.05]LocInv: Localization-aware Inversion for Text-Guided Image Editing [[PDF](https://arxiv.org/abs/2405.01496)]\n\n[arxiv 2024.05]SonicDiffusion: Audio-Driven Image Generation and Editing with Pretrained Diffusion Models[[PDF](https://arxiv.org/abs/2405.00878)]\n\n[arxiv 2024.05]MMTryon: Multi-Modal Multi-Reference Control for High-Quality Fashion Generation [[PDF](https://arxiv.org/abs/2405.00448)]\n\n[arxiv 2024.05]Streamlining Image Editing with Layered Diffusion Brushes 
[[PDF](https://arxiv.org/abs/2405.00313)]\n\n[arxiv 2024.05]SOEDiff: Efficient Distillation for Small Object Editing [[PDF](https://arxiv.org/abs/2405.09114)]\n\n[arxiv 2024.05]Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion Model [[PDF](https://arxiv.org/abs/2405.10316),[Page](https://analogist2d.github.io/)]\n\n[arxiv 2024.05]Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control [[PDF](https://arxiv.org/abs/2405.12970),[Page](https://faceadapter.github.io/face-adapter.github.io/)]\n\n[arxiv 2024.05] EmoEdit: Evoking Emotions through Image Manipulation  [[PDF](https://arxiv.org/abs/2405.12661)]\n\n[arxiv 2024.05] ReasonPix2Pix: Instruction Reasoning Dataset for Advanced Image Editing [[PDF](https://arxiv.org/abs/2405.11190)]\n\n[arxiv 2024.05] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing [[PDF](https://arxiv.org/abs/2405.14785),[Page](https://github.com/YangLing0818/EditWorld)]\n\n[arxiv 2024.05]InstaDrag: Lightning Fast and Accurate Drag-based Image Editing Emerging from Videos [[PDF](https://arxiv.org/abs/2405.13722),[Page](https://instadrag.github.io/)]\n\n[arxiv 2024.05] FastDrag: Manipulate Anything in One Step [[PDF](https://arxiv.org/abs/2405.15769)]\n\n[arxiv 2024.05] Enhancing Text-to-Image Editing via Hybrid Mask-Informed Fusion  [[PDF](https://arxiv.org/abs/2405.15313)]\n\n[arxiv 2024.06] DiffUHaul: A Training-Free Method for Object Dragging in Images  [[PDF](https://arxiv.org/abs/2406.01594),[Page](https://omriavrahami.com/diffuhaul/)]\n\n[arxiv 2024.06]  MultiEdits: Simultaneous Multi-Aspect Editing with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.00985),[Page](https://mingzhenhuang.com/projects/MultiEdits.html)]\n\n[arxiv 2024.06] Dreamguider: Improved Training free Diffusion-based Conditional Generation [[PDF](https://arxiv.org/abs/2406.02549),[Page](https://nithin-gk.github.io/dreamguider.github.io/)]\n\n[arxiv 2024.06]Zero-shot Image Editing with Reference Imitation [[PDF](https://arxiv.org/abs/2406.07547),[Page](https://xavierchen34.github.io/MimicBrush-Page)]\n\n[arxiv 2024.07] Image Inpainting Models are Effective Tools for Instruction-guided Image Editing[[PDF](https://arxiv.org/abs/2407.13139)]\n\n[arxiv 2024.07]Text2Place: Affordance-aware Text Guided Human Placement  [[PDF](https://arxiv.org/abs/2407.15446),[Page](https://rishubhpar.github.io/Text2Place/)]\n\n[arxiv 2024.07] RegionDrag: Fast Region-Based Image Editing with Diffusion Models[[PDF](https://arxiv.org/abs/2407.18247),[Page](https://visual-ai.github.io/regiondrag)]\n\n[arxiv 2024.07] FlexiEdit: Frequency-Aware Latent Refinement for Enhanced Non-Rigid Editing [[PDF](https://arxiv.org/abs/2407.17850)]\n\n[arxiv 2024.07] DragText: Rethinking Text Embedding in Point-based Image Editing  [[PDF](https://arxiv.org/abs/2407.17843),[Page](https://micv-yonsei.github.io/dragtext2025/)]\n\n[arxiv 2024.08] MagicFace: Training-free Universal-Style Human Image Customized Synthesis  [[PDF](https://arxiv.org/abs/2408.07433),[Page](https://codegoat24.github.io/MagicFace)]\n\n[arxiv 2024.08] TurboEdit: Instant text-based image editing[[PDF](https://arxiv.org/abs/2408.08332),[Page](https://betterze.github.io/TurboEdit/)]\n\n[arxiv 2024.08] FlexEdit: Marrying Free-Shape Masks to VLLM for Flexible Image Editing  [[PDF](https://arxiv.org/abs/2408.12429),[Page](https://github.com/A-new-b/flex)]\n\n[arxiv 2024.08]  CODE: Confident Ordinary Differential Editing 
[[PDF](https://arxiv.org/abs/2408.12418),[Page](https://github.com/vita-epfl/CODE/)]\n\n[arxiv 2024.08]  Prompt-Softbox-Prompt: A free-text Embedding Control for Image Editing [[PDF](https://arxiv.org/abs/2408.13623)]\n\n[arxiv 2024.08]  Task-Oriented Diffusion Inversion for High-Fidelity Text-based Editing [[PDF](https://arxiv.org/abs/2408.13395)]\n\n[arxiv 2024.08] DiffAge3D: Diffusion-based 3D-aware Face Aging [[PDF](https://arxiv.org/abs/2408.15922)]\n\n[arxiv 2024.09] Guide-and-Rescale: Self-Guidance Mechanism for Effective Tuning-Free Real Image Editing  [[PDF](https://arxiv.org/abs/2409.01322),[Page](https://github.com/FusionBrainLab/Guide-and-Rescale)]\n\n[arxiv 2024.09] InstantDrag: Improving Interactivity in Drag-based Image Editing  [[PDF](https://arxiv.org/abs/2409.08857),[Page](https://joonghyuk.com/instantdrag-web/)]\n\n[arxiv 2024.09] SimInversion: A Simple Framework for Inversion-Based Text-to-Image Editing  [[PDF](https://arxiv.org/abs/2409.10476)]\n\n[arxiv 2024.09]FreeEdit: Mask-free Reference-based Image Editing with Multi-modal Instruction [[PDF](https://arxiv.org/abs/2409.18071),[Page](https://freeedit.github.io/)]\n\n[arxiv 2024.09] GroupDiff: Diffusion-based Group Portrait Editing  [[PDF](https://arxiv.org/abs/2409.14379)]\n\n[arxiv 2024.10]  Combing Text-based and Drag-based Editing for Precise and Flexible Image Editing [[PDF](https://arxiv.org/abs/2410.03097)]\n\n[arxiv 2024.10] PostEdit: Posterior Sampling for Efficient Zero-Shot Image Editing  [[PDF](https://arxiv.org/abs/2410.04844),[Page](https://github.com/TFNTF/PostEdit)]\n\n[arxiv 2024.10] Context-Aware Full Body Anonymization using Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.08551)]\n\n[arxiv 2024.10] BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models  [[PDF](https://arxiv.org/abs/2410.07273)]\n\n[arxiv 2024.10]  Vision-guided and Mask-enhanced Adaptive Denoising for Prompt-based Image Editing [[PDF](https://arxiv.org/html/2410.10496v1)]\n\n[arxiv 2024.10]MagicEraser: Erasing Any Objects via Semantics-Aware Control[[PDF](https://arxiv.org/abs/2410.10207)]\n\n[arxiv 2024.10]  SGEdit: Bridging LLM with Text2Image Generative Model for Scene Graph-based Image Editing [[PDF](https://arxiv.org/abs/2410.11815),[Page](https://bestzzhang.github.io/SGEdit/)]\n\n[arxiv 2024.10] AdaptiveDrag: Semantic-Driven Dragging on Diffusion-Based Image Editing [[PDF](https://arxiv.org/abs/2410.12696),[Page](https://github.com/Calvin11311/AdaptiveDrag)]\n\n[arxiv 2024.10] MambaPainter: Neural Stroke-Based Rendering in a Single Step[[PDF](https://arxiv.org/abs/2410.12524)]\n\n[arxiv 2024.10] ERDDCI: Exact Reversible Diffusion via Dual-Chain Inversion for High-Quality Image Editing  [[PDF](https://arxiv.org/abs/2410.14247)]\n\n[arxiv 2024.10] Schedule Your Edit: A Simple yet Effective Diffusion Noise Schedule for Image Editing  [[PDF](https://arxiv.org/abs/2410.18756)]\n\n[arxiv 2024.11]  DiT4Edit: Diffusion Transformer for Image Editing [[PDF](https://arxiv.org/abs/2411.03286),[Page](https://github.com/fkyyyy/DiT4Edit)]\n\n[arxiv 2024.11] ReEdit: Multimodal Exemplar-Based Image Editing with Diffusion Models  [[PDF](https://arxiv.org/abs/2411.03982)]\n\n[arxiv 2024.11] ProEdit: Simple Progression is All You Need for High-Quality 3D Scene Editing  [[PDF](https://arxiv.org/abs/2411.05006),[Page](https://immortalco.github.io/ProEdit/)]\n\n[arxiv 2024.11] Taming Rectified Flow for Inversion and Editing  
[[PDF](https://arxiv.org/abs/2411.04746),[Page](https://github.com/wangjiangshan0725/RF-Solver-Edit)]\n\n[arxiv 2024.11] Multi-Reward as Condition for Instruction-based Image Editing  [[PDF](https://arxiv.org/abs/2411.04713)]\n\n[arxiv 2024.11] ColorEdit: Training-free Image-Guided Color editing with diffusion model  [[PDF](https://arxiv.org/abs/2411.10232)]\n\n[arxiv 2024.11]  Test-time Conditional Text-to-Image Synthesis Using Diffusion Models [[PDF](https://arxiv.org/abs/2411.10800)]\n\n[arxiv 2024.11] HeadRouter: A Training-free Image Editing Framework for MM-DiTs by Adaptively Routing Attention Heads  [[PDF](https://arxiv.org/abs/2411.15034),[Page](https://yuci-gpt.github.io/headrouter/)] ![Code](https://img.shields.io/github/stars/ICTMCG/HeadRouter?style=social&label=Star)\n\n[arxiv 2024.11] Pathways on the Image Manifold: Image Editing via Video Generation  [[PDF](https://arxiv.org/abs/2411.16819)] \n\n[arxiv 2024.12] SOWing Information: Cultivating Contextual Coherence with MLLMs in Image Generation [[PDF](https://arxiv.org/abs/2411.19182),[Page](https://pyh-129.github.io/SOW/)] ![Code](https://img.shields.io/github/stars/wangruoyu02/COW?style=social&label=Star)\n\n[arxiv 2024.12]  PainterNet: Adaptive Image Inpainting with Actual-Token Attention and Diverse Mask Control [[PDF](https://arxiv.org/abs/2412.01223)]\n\n[arxiv 2024.12] InstantSwap: Fast Customized Concept Swapping across Sharp Shape Differences  [[PDF](https://arxiv.org/abs/2412.01197),[Page](https://instantswap.github.io/)] ![Code](https://img.shields.io/github/stars/chenyangzhu1/InstantSwap?style=social&label=Star)\n\n[arxiv 2024.12] FreeCond: Free Lunch in the Input Conditions of Text-Guided Inpainting  [[PDF](https://arxiv.org/abs/2412.00427),[Page](https://github.com/basiclab/FreeCond)] ![Code](https://img.shields.io/github/stars/basiclab/FreeCond?style=social&label=Star)\n\n[arxiv 2024.12]  Steering Rectified Flow Models in the Vector Field for Controlled Image Generation [[PDF](https://arxiv.org/abs/2412.00100),[Page](https://github.com/FlowChef/flowchef)] ![Code](https://img.shields.io/github/stars/FlowChef/flowchef?style=social&label=Star)\n\n[arxiv 2024.12]  BrushEdit: All-In-One Image Inpainting and Editing [[Page](https://liyaowei-stu.github.io/project/BrushEdit/)]\n\n[arxiv 2024.12]  Dual-Schedule Inversion: Training- and Tuning-Free Inversion for Real Image Editing [[PDF](https://arxiv.org/pdf/2412.11152)]\n\n[arxiv 2024.12]  Prompt Augmentation for Self-supervised Text-guided Image Manipulation [[PDF](https://arxiv.org/pdf/2412.13081)]\n\n[arxiv 2024.12] PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation  [[PDF](https://arxiv.org/abs/2412.14283),[Page](https://github.com/Ascend-Research/PixelMan)] \n\n[arxiv 2024.12]  Explaining in Diffusion: Explaining a Classifier Through Hierarchical Semantics with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2412.18604),[Page](https://explain-in-diffusion.github.io/)] \n\n[arxiv 2025.01] Edicho: Consistent Image Editing in the Wild  [[PDF](https://arxiv.org/abs/2412.21079),[Page](https://ezioby.github.io/edicho/)] ![Code](https://img.shields.io/github/stars/EzioBy/edicho?style=social&label=Star)\n\n[arxiv 2025.01]  Exploring Optimal Latent Trajectory for Zero-shot Image Editing [[PDF](https://arxiv.org/pdf/2501.03631)]\n\n[arxiv 2025.01] FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors  
[[PDF](https://arxiv.org/abs/2501.08225),[Page](https://github.com/YBYBZhang/FramePainter)] ![Code](https://img.shields.io/github/stars/YBYBZhang/FramePainter?style=social&label=Star)\n\n[arxiv 2025.02] PartEdit: Fine-Grained Image Editing using Pre-Trained Diffusion Models  [[PDF](https://arxiv.org/abs/2502.04050),[Page](https://partedit.github.io/PartEdit/)]\n\n[arxiv 2025.02] PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data  [[PDF](https://arxiv.org/abs/2502.14397),[Page](https://github.com/showlab/PhotoDoodle)] ![Code](https://img.shields.io/github/stars/showlab/PhotoDoodle?style=social&label=Star)\n\n[arxiv 2025.02] KV-Edit: Training-Free Image Editing for Precise Background Preservation  [[PDF](https://arxiv.org/abs/2502.17363),[Page](https://xilluill.github.io/projectpages/KV-Edit/)] ![Code](https://img.shields.io/github/stars/Xilluill/KV-Edit?style=social&label=Star)\n\n[arxiv 2025.02]  Tight Inversion: Image-Conditioned Inversion for Real Image Editing [[PDF](https://arxiv.org/pdf/2502.20376)]\n\n[arxiv 2025.03] DiffBrush: Just Painting the Art by Your Hands  [[PDF](https://arxiv.org/abs/2502.20904)]\n\n[arxiv 2025.03]  h-Edit: Effective and Flexible Diffusion-Based Editing via Doob’s h-Transform [[PDF](https://arxiv.org/pdf/2503.02187),[Page](https://github.com/nktoan/h-edit)] ![Code](https://img.shields.io/github/stars/nktoan/h-edit?style=social&label=Star)\n\n[arxiv 2025.03] InteractEdit: Zero-Shot Editing of Human-Object Interactions in Images  [[PDF](https://arxiv.org/pdf/2503.09130),[Page](https://jiuntian.github.io/interactedit/)] ![Code](https://img.shields.io/github/stars/jiuntian/interactedit?style=social&label=Star)\n\n[arxiv 2025.03] MoEdit: On Learning Quantity Perception for Multi-object Image Editing  [[PDF](https://arxiv.org/abs/2503.10112),[Page](https://github.com/Tear-kitty/MoEdit)] ![Code](https://img.shields.io/github/stars/Tear-kitty/MoEdit?style=social&label=Star)\n\n[arxiv 2025.03]  Edit Transfer: Learning Image Editing via Vision In-Context Relations [[PDF](https://arxiv.org/abs/2503.13327),[Page](https://cuc-mipg.github.io/EditTransfer.github.io/)] ![Code](https://img.shields.io/github/stars/CUC-MIPG/Edit-Transfer?style=social&label=Star)\n\n[arxiv 2025.03] Single Image Iterative Subject-driven Generation and Editing  [[PDF](https://arxiv.org/pdf/2503.16025)]\n\n[arxiv 2025.03] Adams Bashforth Moulton Solver for Inversion and Editing in Rectified Flow  [[PDF](https://arxiv.org/pdf/2503.16522)]\n\n[arxiv 2025.03] EditCLIP: Representation Learning for Image Editing  [[PDF](https://arxiv.org/abs/2503.20318),[Page](https://qianwangx.github.io/EditCLIP/)] ![Code](https://img.shields.io/github/stars/QianWangX/EditCLIP?style=social&label=Star)\n\n[arxiv 2025.04]  FreeInv: Free Lunch for Improving DDIM Inversion [[PDF](https://arxiv.org/pdf/2503.23035),[Page](https://yuxiangbao.github.io/FreeInv/)] \n\n[arxiv 2025.04] TurboFill: Adapting Few-step Text-to-image Model for Fast Image Inpainting  [[PDF](https://arxiv.org/abs/2504.00996),[Page](https://liangbinxie.github.io/projects/TurboFill/)]\n\n[arxiv 2025.04]  UNIEDIT-FLOW: Unleashing Inversion and Editing in the Era of Flow Models [[PDF](https://arxiv.org/pdf/2504.13109),[Page](https://uniedit-flow.github.io/)] \n\n[arxiv 2025.04]  SPICE: A Synergistic, Precise, Iterative, and Customizable Image Editing Workflow [[PDF](https://arxiv.org/abs/2504.09697),[Page](https://kenantang.github.io/spice/)] ![Code](https://img.shields.io/github/stars/kenantang/spice?style=social&label=Star)\n\n[arxiv 
2025.05] InstructAttribute: Fine-grained Object Attributes editing with Instruction  [[PDF](https://arxiv.org/html/2505.00751v1)]\n\n[arxiv 2025.05] Towards Scalable Human-aligned Benchmark for Text-guided Image Editing  [[PDF](https://arxiv.org/abs/2505.00502),[Page](https://github.com/SuhoRyu/HATIE)] ![Code](https://img.shields.io/github/stars/SuhoRyu/HATIE?style=social&label=Star)\n\n[arxiv 2025.05]  PixelHacker: Image Inpainting with Structural and Semantic Consistency [[PDF](https://arxiv.org/abs/2504.20438),[Page](https://hustvl.github.io/PixelHacker/)] ![Code](https://img.shields.io/github/stars/hustvl/PixelHacker?style=social&label=Star)\n\n[arxiv 2025.05]  CompleteMe: Reference-based Human Image Completion [[PDF](https://arxiv.org/pdf/2504.20042)]\n\n[arxiv 2025.05] SuperEdit: Rectifying and Facilitating Supervision for Instruction-Based Image Editing  [[PDF](https://arxiv.org/abs/2505.02370),[Page](https://liming-ai.github.io/SuperEdit)] ![Code](https://img.shields.io/github/stars/bytedance/SuperEdit?style=social&label=Star)\n\n[arxiv 2025.05] Multi-turn Consistent Image Editing  [[PDF](https://arxiv.org/pdf/2505.04320)]\n\n[arxiv 2025.05] MDE-Edit: Masked Dual-Editing for Multi-Object Image Editing via Diffusion Models  [[PDF](https://arxiv.org/pdf/2505.05101)]\n\n[arxiv 2025.05]  R-Genie: Reasoning-Guided Generative Image Editing [[PDF](https://arxiv.org/abs/2505.17768),[Page](https://dongzhang89.github.io/RGenie.github.io/)] ![Code](https://img.shields.io/github/stars/HE-Lingfeng/R-Genie-public?style=social&label=Star)\n\n[arxiv 2025.06]  EDITOR: Effective and Interpretable Prompt Inversion for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2506.03067)]\n\n[arxiv 2025.06] DCI: Dual-Conditional Inversion for Boosting Diffusion-Based Image Editing  [[PDF](https://arxiv.org/abs/2506.02560)]\n\n[arxiv 2025.06]  Image Editing As Programs with Diffusion Models [[PDF](https://arxiv.org/abs/2506.04158),[Page](https://github.com/YujiaHu1109/IEAP)] ![Code](https://img.shields.io/github/stars/YujiaHu1109/IEAP?style=social&label=Star)\n\n[arxiv 2025.06]  PairEdit: Learning Semantic Variations for Exemplar-based Image Editing [[PDF](https://arxiv.org/pdf/2506.07992),[Page](https://github.com/xudonmao/PairEdit)] ![Code](https://img.shields.io/github/stars/xudonmao/PairEdit?style=social&label=Star)\n\n[arxiv 2025.06] DragNeXt: Rethinking Drag-Based Image Editing  [[PDF](https://arxiv.org/abs/2506.07611)] \n\n[arxiv 2025.06]  AttentionDrag: Exploiting Latent Correlation Knowledge in Pre-trained Diffusion Models for Image Editing [[PDF](https://arxiv.org/abs/2506.13301),[Page](https://github.com/GPlaying/AttentionDrag)] ![Code](https://img.shields.io/github/stars/GPlaying/AttentionDrag?style=social&label=Star)\n\n[arxiv 2025.06] CPAM: Context-Preserving Adaptive Manipulation for Zero-Shot Real Image Editing  [[PDF](https://arxiv.org/pdf/2506.18438)]\n\n[arxiv 2025.07] Beyond Simple Edits: X-Planner for Complex Instruction-Based Image Editing  [[PDF](http://arxiv.org/abs/2507.05259),[Page](https://danielchyeh.github.io/x-planner/)] \n\n[arxiv 2025.07] Stable Score Distillation  [[PDF](http://arxiv.org/abs/2507.09168),[Page](https://github.com/Alex-Zhu1/SSD)] ![Code](https://img.shields.io/github/stars/Alex-Zhu1/SSD?style=social&label=Star)\n\n[arxiv 2025.08]FLUX-Makeup: High-Fidelity, Identity-Consistent, and Robust Makeup Transfer via Diffusion Transformer[[PDF](https://arxiv.org/pdf/2508.05069)]\n\n[arxiv 2025.08]  Follow-Your-Shape: 
Shape-Aware Image Editing via Trajectory-Guided Region Control [[PDF](https://arxiv.org/abs/2508.08134),[Page](https://follow-your-shape.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/FollowYourShape?style=social&label=Star)\n\n[arxiv 2025.08] Exploring Multimodal Diffusion Transformers for Enhanced Prompt-based Image Editing  [[PDF](https://arxiv.org/pdf/2508.07519)]\n\n[arxiv 2025.08] Training-Free Text-Guided Color Editing with Multi-Modal Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2508.09131),[Page](https://zxyin.github.io/ColorCtrl/)] \n\n[arxiv 2025.08]  TweezeEdit: Consistent and Efficient Image Editing with Path Regularization [[PDF](https://arxiv.org/abs/2508.10498)]\n\n[arxiv 2025.09]  Inpaint4Drag: Repurposing Inpainting Models for Drag-Based Image Editing via Bidirectional Warping [[PDF](https://arxiv.org/abs/2509.04582),[Page](https://visual-ai.github.io/inpaint4drag/)] ![Code](https://img.shields.io/github/stars/Visual-AI/Inpaint4Drag?style=social&label=Star)\n\n[arxiv 2025.09] LazyDrag: Enabling Stable Drag-Based Editing on Multi-Modal Diffusion Transformers via Explicit Correspondence  [[PDF](https://arxiv.org/abs/2509.12203),[Page](https://zxyin.github.io/LazyDrag)] \n\n[arxiv 2025.09]  AutoEdit: Automatic Hyperparameter Tuning for Image Editing [[PDF](https://arxiv.org/abs/2509.15031)]\n\n[arxiv 2025.09] FlashEdit: Decoupling Speed, Structure, and Semantics for Precise Image Editing  [[PDF](https://arxiv.org/abs/2509.22244),[Page](https://github.com/JunyiWuCode/FlashEdit)] ![Code](https://img.shields.io/github/stars/JunyiWuCode/FlashEdit?style=social&label=Star)\n\n[arxiv 2025.09]  TDEdit: A Unified Diffusion Framework for Text-Drag Guided Image Manipulation [[PDF](https://arxiv.org/abs/2509.21905)]\n\n[arxiv 2025.10]  IMAGEdit: Let Any Subject Transform [[PDF](https://arxiv.org/abs/2510.01186),[Page](https://muzishen.github.io/IMAGEdit/)] ![Code](https://img.shields.io/github/stars/XWH-A/IMAGEdit?style=social&label=Star)\n\n[arxiv 2025.10] EditTrack: Detecting and Attributing AI-assisted Image Editing  [[PDF](https://arxiv.org/abs/2510.01173)]\n\n[arxiv 2025.10]  Object-AVEdit: An Object-level Audio-Visual Editing Model [[PDF](https://arxiv.org/abs/2510.00050)]\n\n[arxiv 2025.10] Optimal Control Meets Flow Matching: A Principled Route to Multi-Subject Fidelity  [[PDF](https://arxiv.org/abs/2510.02315),[Page](https://github.com/ericbill21/FOCUS/)] ![Code](https://img.shields.io/github/stars/ericbill21/FOCUS/?style=social&label=Star)\n\n[arxiv 2025.10]  DragFlow: Unleashing DiT Priors with Region Based Supervision for Drag Editing [[PDF](https://arxiv.org/abs/2510.02253)]\n\n[arxiv 2025.10]  TBStar-Edit: From Image Editing Pattern Shifting to Consistency Enhancement [[PDF](https://arxiv.org/abs/2510.04483)]\n\n[arxiv 2025.10] SAEdit: Token-level control for continuous image editing via Sparse AutoEncoder  [[PDF](https://arxiv.org/abs/2510.05081),[Page](https://ronen94.github.io/SAEdit/)] \n\n[arxiv 2025.10]  Fine-grained Defocus Blur Control for Generative Image Models [[PDF](https://arxiv.org/abs/2510.06215),[Page](https://www.ayshrv.com/defocus-blur-gen)] \n\n[arxiv 2025.10]  Kontinuous Kontext: Continuous Strength Control for Instruction-based Image Editing [[PDF](https://arxiv.org/abs/2510.08532),[Page](https://snap-research.github.io/kontinuouskontext/#)]\n\n[arxiv 2025.10] One Stone with Two Birds: A Null-Text-Null Frequency-Aware Diffusion Models for Text-Guided 
Image Inpainting [[PDF](https://arxiv.org/abs/2510.08273),[Page](https://github.com/htyjers/NTN-Diff)] ![Code](https://img.shields.io/github/stars/htyjers/NTN-Diff?style=social&label=Star)\n\n[arxiv 2025.10] Learning an Image Editing Model without Image Editing Pairs  [[PDF](https://arxiv.org/abs/2510.14978),[Page](https://nupurkmr9.github.io/npedit/)] \n\n[arxiv 2025.10] ConsistEdit: Highly Consistent and Precise Training-free Visual Editing  [[PDF](https://arxiv.org/abs/2510.17803),[Page](https://zxyin.github.io/ConsistEdit/)] ![Code](https://img.shields.io/github/stars/zxYin/ConsistEdit_Code?style=social&label=Star)\n\n[arxiv 2025.10] FlowCycle: Pursuing Cycle-Consistent Flows for Text-based Editing  [[PDF](https://arxiv.org/abs/2510.20212),[Page](https://github.com/HKUST-LongGroup/FlowCycle)] ![Code](https://img.shields.io/github/stars/HKUST-LongGroup/FlowCycle?style=social&label=Star)\n\n[arxiv 2025.10]  Group-Relative Attention Guidance for Image Editing [[PDF](https://arxiv.org/abs/2510.24657),[Page](https://github.com/little-misfit/GRAG-Image-Editing)] ![Code](https://img.shields.io/github/stars/little-misfit/GRAG-Image-Editing?style=social&label=Star)\n\n[arxiv 2025.10]  RegionE: Adaptive Region-Aware Generation for Efficient Image Editing [[PDF](https://arxiv.org/abs/2510.25590),[Page](https://github.com/Peyton-Chen/RegionE)] ![Code](https://img.shields.io/github/stars/Peyton-Chen/RegionE?style=social&label=Star)\n\n[arxiv 2025.10]  SplitFlow: Flow Decomposition for Inversion-Free Text-to-Image Editing [[PDF](https://arxiv.org/abs/2510.25970)]\n\n[arxiv 2025.11]  Personalized Image Editing in Text-to-Image Diffusion Models via Collaborative Direct Preference Optimization [[Page](https://personalized-editing.github.io/)] \n\n[arxiv 2025.11] Are Image-to-Video Models Good Zero-Shot Image Editors?  
[[PDF](https://arxiv.org/pdf/2511.19435)]\n\n[arxiv 2025.11]  Video4Edit: Viewing Image Editing as a Degenerate Temporal Process [[PDF](https://arxiv.org/pdf/2511.18131)]\n\n[arxiv 2025.12] FreqEdit: Preserving High-Frequency Features for Robust Multi-Turn Image Editing  [[PDF](https://arxiv.org/abs/2512.01755),[Page](https://freqedit.github.io/)] ![Code](https://img.shields.io/github/stars/FreqEdit/FreqEdit?style=social&label=Star)\n\n[arxiv 2025.12]  ProEdit: Inversion-based Editing From Prompts Done Right [[PDF](https://arxiv.org/abs/2512.22118),[Page](https://isee-laboratory.github.io/ProEdit/)] ![Code](https://img.shields.io/github/stars/iSEE-Laboratory/ProEdit?style=social&label=Star)\n\n[arxiv 2025.12] On Exact Editing of Flow-Based Diffusion Models  [[PDF](https://arxiv.org/pdf/2512.24015)]\n\n[arxiv 2026.01] Talk2Move: Reinforcement Learning for Text-Instructed Object-Level Geometric Transformation in Scenes  [[PDF](https://arxiv.org/abs/2601.02356),[Page](https://sparkstj.github.io/talk2move/)] ![Code](https://img.shields.io/github/stars/sparkstj/Talk2Move?style=social&label=Star)\n\n[arxiv 2026.01]  Unraveling MMDiT Blocks: Training-free Analysis and Enhancement of Text-conditioned Diffusion [[PDF](https://arxiv.org/pdf/2601.02211)]\n\n[arxiv 2026.01]  TalkPhoto: A Versatile Training-Free Conversational Assistant for Intelligent Image Editing [[PDF](https://arxiv.org/abs/2601.01915)]\n\n[arxiv 2026.02]  Controlling Your Image via Simplified Vector Graphics [[PDF](https://arxiv.org/abs/2602.14443),[Page](https://guolanqing.github.io/Vec2Pix/)] \n\n[arxiv 2026.02] Instruction-based Image Editing with Planning, Reasoning, and Generation  [[PDF](https://arxiv.org/pdf/2602.22624)]\n\n[arxiv 2026.03] CARE-Edit: Condition-Aware Routing of Experts for Contextual Image Editing  [[PDF](https://arxiv.org/abs/2603.08589),[Page](https://care-edit.github.io/)] ![Code](https://img.shields.io/github/stars/CARE-Edit/Code?style=social&label=Star)\n\n[arxiv 2026.03] VeloEdit: Training-Free Consistent and Continuous Instruction-Based Image Editing via Velocity Field Decomposition  [[PDF](https://arxiv.org/abs/2603.13388),[Page](https://github.com/xmulzq/VeloEdit)] ![Code](https://img.shields.io/github/stars/xmulzq/VeloEdit?style=social&label=Star)\n\n[arxiv 2026.03] MSRAMIE: Multimodal Structured Reasoning Agent for Multi-instruction Image Editing  [[PDF](https://arxiv.org/abs/2603.16967)]\n\n[arxiv 2026.03] Diffusion-Based Makeup Transfer with Facial Region-Aware Makeup Features [[PDF](https://arxiv.org/abs/2603.20012)]\n\n[arxiv 2026.03] AdaEdit: Adaptive Temporal and Channel Modulation for Flow-Based Image Editing  [[PDF](https://arxiv.org/abs/2603.21615)] ![Code](https://img.shields.io/github/stars/leeguandong/AdaEdit?style=social&label=Star)\n\n[arxiv 2026.03] Group Editing: Edit Multiple Images in One Go  [[PDF](https://arxiv.org/abs/2603.22883),[Page](https://group-editing.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/GroupEditing?style=social&label=Star)\n\n## end of editing \n\n\n## Analysis\n[arxiv 2025.02]  SliderSpace: Decomposing the Visual Capabilities of Diffusion Models [[PDF](https://arxiv.org/pdf/2502.01639),[Page](https://sliderspace.baulab.info/)] \n\n\n## reason\n[arxiv 2025.06]  MMMG: A Massive, Multidisciplinary, Multi-Tier 
Generation Benchmark for Text-to-Image Reasoning [[PDF](https://arxiv.org/abs/2506.10963),[Page](https://mmmgbench.github.io/)] ![Code](https://img.shields.io/github/stars/MMMGBench/MMMG/?style=social&label=Star)\n\n[arxiv 2025.07]  Reasoning to Edit: Hypothetical Instruction-Based Image Editing with Visual Reasoning [[PDF](https://arxiv.org/abs/2507.01908),[Page](https://github.com/hithqd/ReasonBrain)] ![Code](https://img.shields.io/github/stars/hithqd/ReasonBrain?style=social&label=Star)\n\n[arxiv 2025.09] FLUX-Reason-6M & PRISM-Bench: A Million-Scale Text-to-Image Reasoning Dataset and Comprehensive Benchmark  [[PDF](https://arxiv.org/abs/2509.09680),[Page](https://flux-reason-6m.github.io/)] ![Code](https://img.shields.io/github/stars/rongyaofang/prism-bench?style=social&label=Star)\n\n[arxiv 2025.12] MIRA: Multimodal Iterative Reasoning Agent for Image Editing  [[PDF](https://arxiv.org/abs/2511.21087),[Page](https://zzzmyyzeng.github.io/MIRA/)] \n\n[arxiv 2025.12]  ThinkGen: Generalized Thinking for Visual Generation [[PDF](https://arxiv.org/pdf/2512.23568),[Page](https://github.com/jiaosiyuu/ThinkGen)] ![Code](https://img.shields.io/github/stars/jiaosiyuu/ThinkGen?style=social&label=Star)\n\n[arxiv 2025.12] DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models  [[PDF](https://arxiv.org/abs/2512.24165),[Page](https://diffthinker-project.github.io/)] ![Code](https://img.shields.io/github/stars/lcqysl/DiffThinker?style=social&label=Star)\n\n[arxiv 2026.01] Unified Thinker: A General Reasoning Modular Core for Image Generation  [[PDF](https://arxiv.org/pdf/2601.03127),[Page](https://github.com/alibaba/UnifiedThinker)] ![Code](https://img.shields.io/github/stars/alibaba/UnifiedThinker?style=social&label=Star)\n\n[arxiv 2026.01]  ThinkRL-Edit: Thinking in Reinforcement Learning for Reasoning-Centric Image Editing [[PDF](https://arxiv.org/abs/2601.03467)]\n\n[arxiv 2026.01] Think-Then-Generate: Reasoning-Aware Text-to-Image Diffusion with LLM Encoders  [[PDF](https://arxiv.org/abs/2601.10332),[Page](https://github.com/SJTU-DENG-Lab/Think-Then-Generate)] ![Code](https://img.shields.io/github/stars/SJTU-DENG-Lab/Think-Then-Generate?style=social&label=Star)\n\n[arxiv 2026.02] UniReason 1.0: A Unified Reasoning Framework for World Knowledge Aligned Image Generation and Editing  [[PDF](https://arxiv.org/abs/2602.02437),[Page](https://github.com/AlenjandroWang/UniReason)] ![Code](https://img.shields.io/github/stars/AlenjandroWang/UniReason?style=social&label=Star)\n\n[arxiv 2026.02] Uni-Animator: Towards Unified Visual Colorization  [[PDF](https://arxiv.org/pdf/2602.23191)]\n\n[arxiv 2026.03]  Generative Visual Chain-of-Thought for Image Editing [[PDF](https://arxiv.org/abs/2603.01893),[Page](https://pris-cv.github.io/GVCoT/)] ![Code](https://img.shields.io/github/stars/PRIS-CV/GVCoT?style=social&label=Star)\n\n[arxiv 2026.03] GRADE: Benchmarking Discipline-Informed Reasoning in Image Editing  [[PDF](https://arxiv.org/abs/2603.12264),[Page](https://grade-bench.github.io/)] ![Code](https://img.shields.io/github/stars/VisionXLab/GRADE?style=social&label=Star)\n\n\n## Unified Generation\n\n[arxiv 2024.10] A Simple Approach to Unifying Diffusion-based Conditional Generation [[PDF](https://arxiv.org/abs/2410.11439),[Page](https://lixirui142.github.io/unicon-diffusion/)] ![Code](https://img.shields.io/github/stars/lixirui142/UniCon?style=social&label=Star)\n\n[arxiv 
2024.11] One Diffusion to Generate Them All  [[PDF](https://arxiv.org/abs/2411.16318),[Page](https://github.com/lehduong/OneDiffusion)] ![Code](https://img.shields.io/github/stars/lehduong/OneDiffusion?style=social&label=Star)\n\n[arxiv 2024.11] OminiControl: Minimal and Universal Control for Diffusion Transformer  [[PDF](https://arxiv.org/abs/2411.15098),[Page](https://github.com/Yuanshi9815/OminiControl)] ![Code](https://img.shields.io/github/stars/Yuanshi9815/OminiControl?style=social&label=Star)\n\n[arxiv 2024.12] Adaptive Blind All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2411.18412),[Page](https://aba-ir.github.io/)] ![Code](https://img.shields.io/github/stars/davidserra9/abair/?style=social&label=Star)\n\n[arxiv 2024.12] OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows  [[PDF](https://arxiv.org/abs/2412.01169)] ![Code](https://img.shields.io/github/stars/jacklishufan/OmniFlows?style=social&label=Star)\n\n[arxiv 2024.12] UniReal: Universal Image Generation and Editing via Learning Real-world Dynamics  [[PDF](https://arxiv.org/abs/2412.07774),[Page](https://xavierchen34.github.io/UniReal-Page/)]\n\n[arxiv 2024.12]  BrushEdit: All-In-One Image Inpainting and Editing [[Page](https://liyaowei-stu.github.io/project/BrushEdit/)]\n\n[arxiv 2024.12] MetaMorph: Multimodal Understanding and Generation via Instruction Tuning  [[PDF](https://arxiv.org/abs/2412.14164v1),[Page](https://tsb0601.github.io/metamorph/)] \n\n[arxiv 2024.12]  DreamOmni: Unified Image Generation and Editing [[PDF](https://arxiv.org/pdf/2412.17098),[Page](https://zj-binxia.github.io/DreamOmni-ProjectPage/)] \n\n[arxiv 2025.01] ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling  [[PDF](https://arxiv.org/abs/2501.02487),[Page](https://ali-vilab.github.io/ACE_plus_page/)] ![Code](https://img.shields.io/github/stars/ali-vilab/ACE_plus?style=social&label=Star)\n\n[arxiv 2025.01] EditAR: Unified Conditional Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2501.04699),[Page](https://jitengmu.github.io/EditAR/)] ![Code](https://img.shields.io/github/stars/JitengMu/EditAR?style=social&label=Star)\n\n[arxiv 2025.03] MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing  [[PDF](https://arxiv.org/abs/2502.21291),[Page](https://github.com/Eureka-Maggie/MIGE)] ![Code](https://img.shields.io/github/stars/Eureka-Maggie/MIGE?style=social&label=Star)\n\n[arxiv 2025.03]  WeGen: A Unified Model for Interactive Multimodal Generation as We Chat [[PDF](https://arxiv.org/pdf/2503.01115),[Page](https://github.com/hzphzp/WeGen)] ![Code](https://img.shields.io/github/stars/hzphzp/WeGen?style=social&label=Star) \n\n[arxiv 2025.03] UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2503.09277),[Page](https://github.com/Xuan-World/UniCombine)] ![Code](https://img.shields.io/github/stars/Xuan-World/UniCombine?style=social&label=Star)\n\n[arxiv 2025.03] RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models  [[PDF](https://arxiv.org/pdf/2503.10406),[Page](https://lyne1.github.io/RealGeneral/)]\n\n[arxiv 2025.03]  BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing [[PDF](https://arxiv.org/pdf/2503.13434),[Page](https://liyaowei-stu.github.io/project/BlobCtrl/)] 
![Code](https://img.shields.io/github/stars/TencentARC/BlobCtrl?style=social&label=Star)\n\n[arxiv 2025.04] VisualCloze : A Universal Image Generation Framework via Visual In-Context Learning  [[PDF](https://arxiv.org/abs/2504.07960),[Page](https://visualcloze.github.io/)] ![Code](https://img.shields.io/github/stars/lzyhha/VisualCloze?style=social&label=Star)\n\n[arxiv 2025.04]  Step1X-Edit: A Practical Framework for General Image Editing [[PDF](https://arxiv.org/abs/2504.17761),[Page](https://github.com/stepfun-ai/Step1X-Edit)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step1X-Edit?style=social&label=Star)\n\n[arxiv 2025.05]  In-Context Edit: Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer [[PDF](https://arxiv.org/pdf/2504.20690),[Page](https://river-zhang.github.io/ICEdit-gh-pages/)] ![Code](https://img.shields.io/github/stars/River-Zhang/ICEdit?style=social&label=Star)\n\n[arxiv 2025.06]  SeedEdit 3.0: Fast and High-Quality Generative Image Editing [[PDF](https://arxiv.org/abs/2506.05083),[Page](https://seed.bytedance.com/zh/tech/seededit)] \n\n[arxiv 2025.06]  VINCIE: Unlocking In-context Image Editing from Video [[PDF](https://arxiv.org/pdf/2506.10941),[Page](https://vincie2025.github.io/)] \n\n[arxiv 2025.06] FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space  [[PDF](https://arxiv.org/abs/2506.15742),[Page](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)] ![Code](https://img.shields.io/github/stars/black-forest-labs/flux?style=social&label=Star)\n\n[arxiv 2025.07]  UniLDiff: Unlocking the Power of Diffusion Priors for All-in-One Image Restoration [[PDF](https://arxiv.org/abs/2507.23685)]\n\n[arxiv 2025.08] UniEdit-I: Training-free Image Editing for Unified VLM via Iterative Understanding, Editing and Verifying  [[PDF](https://arxiv.org/pdf/2508.03142)]\n\n[arxiv 2025.08]  EvoMakeup: High-Fidelity and Controllable Makeup Editing with MakeupQuad [[PDF](https://arxiv.org/pdf/2508.05994)]\n\n[arxiv 2025.09]  Lego-Edit: A General Image Editing Framework with Model-Level Bricks and MLLM Builder [[PDF](https://arxiv.org/abs/2509.12883),[Page](https://xiaomi-research.github.io/lego-edit/)] ![Code](https://img.shields.io/github/stars/xiaomi-research/lego-edit?style=social&label=Star)\n\n[arxiv 2025.09] MultiEdit: Advancing Instruction-based Image Editing on Diverse and Challenging Tasks  [[PDF](https://arxiv.org/abs/2509.14638),[Page](https://huggingface.co/datasets/inclusionAI/MultiEdit)] \n\n[arxiv 2025.09] UniVid: Unifying Vision Tasks with Pre-trained Video Generation Models  [[PDF](https://arxiv.org/abs/2509.21760),[Page](https://github.com/CUC-MIPG/UniVid)] ![Code](https://img.shields.io/github/stars/CUC-MIPG/UniVid?style=social&label=Star)\n\n[arxiv 2025.10] Query-Kontext: An Unified Multimodal Model for Image Generation and Editing  [[PDF](https://arxiv.org/abs/2509.26641)]\n\n[arxiv 2025.10]  ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation [[PDF](https://arxiv.org/abs/2510.04290),[Page](https://research.nvidia.com/labs/toronto-ai/chronoedit)] ![Code](https://img.shields.io/github/stars/nv-tlabs/ChronoEdit?style=social&label=Star)\n\n[arxiv 2025.10] DreamOmni2: Multimodal Instruction-based Editing and Generation  [[PDF](https://arxiv.org/abs/2510.06679),[Page](https://github.com/dvlab-research/DreamOmni2)] ![Code](https://img.shields.io/github/stars/dvlab-research/DreamOmni2?style=social&label=Star)\n\n[arxiv 2025.10]  Ming-UniVision: Joint 
Image Understanding and Generation with a Unified Continuous Tokenizer [[PDF](https://arxiv.org/abs/2510.06590),[Page](https://github.com/inclusionAI/Ming-UniVision)] ![Code](https://img.shields.io/github/stars/inclusionAI/Ming-UniVision?style=social&label=Star) 

[arxiv 2025.10]  UniFusion: Vision-Language Model as Unified Encoder in Image Generation [[PDF](https://arxiv.org/abs/2510.12789),[Page](https://thekevinli.github.io/unifusion/)] 

[arxiv 2025.10]  Uniworld-V2: Reinforce Image Editing with Diffusion Negative-Aware Finetuning and MLLM Implicit Feedback [[PDF](https://arxiv.org/abs/2510.16888),[Page](https://github.com/PKU-YuanGroup/UniWorld-V2)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/UniWorld-V2?style=social&label=Star)

[arxiv 2025.11] iMontage: Unified, Versatile, Highly Dynamic Many-to-many Image Generation  [[PDF](https://arxiv.org/abs/2511.20635),[Page](https://github.com/Kr1sJFU/iMontage)] ![Code](https://img.shields.io/github/stars/Kr1sJFU/iMontage?style=social&label=Star)

[arxiv 2025.12] ReasonEdit: Towards Reasoning-Enhanced Image Editing Models  [[PDF](https://arxiv.org/abs/2511.22625),[Page](https://github.com/stepfun-ai/Step1X-Edit)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step1X-Edit?style=social&label=Star)

[arxiv 2025.12] ClusIR: Towards Cluster-Guided All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2512.10948)]

[arxiv 2025.12]  DreamOmni3: Scribble-based Editing and Generation [[PDF](https://arxiv.org/abs/2512.22525),[Page](https://github.com/dvlab-research/DreamOmni3)] ![Code](https://img.shields.io/github/stars/dvlab-research/DreamOmni3?style=social&label=Star)

[arxiv 2026.02]  CoLoGen: Progressive Learning of Concept–Localization Duality for Unified Image Generation [[PDF](https://arxiv.org/abs/2602.22150)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Generation and Understanding in a Unified Framework 
[arxiv 2024.11] Diff-2-in-1: Bridging Generation and Dense Perception with Diffusion Models  [[PDF](https://arxiv.org/abs/2411.05005),[Page]()]

[arxiv 2024.11] Scaling Properties of Diffusion Models for Perceptual Tasks  [[PDF](https://arxiv.org/abs/2411.08034),[Page](https://scaling-diffusion-perception.github.io/)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Architecture

[arxiv 2024.03]Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts [[PDF](https://arxiv.org/abs/2403.09176),[Page](https://byeongjun-park.github.io/Switch-DiT/)]

[arxiv 2024.05] TerDiT: Ternary Diffusion Models with Transformers [[PDF](https://arxiv.org/abs/2405.14854),[Page](https://github.com/Lucky-Lance/TerDiT)]

[arxiv 2024.05] DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention  [[PDF](https://arxiv.org/abs/2405.18428),[Page](https://github.com/hustvl/DiG)]

[arxiv 2024.05]  ViG: Linear-complexity Visual Sequence Learning with Gated Linear Attention [[PDF](https://arxiv.org/abs/2405.18425),[Page](https://github.com/hustvl/ViG)]

[arxiv 2024.06] Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models[[PDF](https://arxiv.org/abs/2406.09416),[Page](https://qihao067.github.io/projects/DiMR)]

[arxiv 2024.07] UltraEdit: Instruction-based Fine-Grained Image Editing at 
Scale [[PDF](https://arxiv.org/abs/2407.05282),[Page](https://ultra-editing.github.io/)]\n\n[arxiv 2024.07] Add-SD: Rational Generation without Manual Reference  [[PDF](https://arxiv.org/abs/2407.21016),[Page](https://github.com/ylingfeng/Add-SD)]\n\n[arxiv 2024.07] Specify and Edit: Overcoming Ambiguity in Text-Based Image Editing  [[PDF](https://arxiv.org/abs/2407.20232),[Page](https://github.com/fabvio/SANE)]\n\n[arxiv 2024.08] FastEdit: Fast Text-Guided Single-Image Editing via Semantic-Aware Diffusion Fine-Tuning  [[PDF](https://arxiv.org/abs/2408.03355),[Page](https://fastedit-sd.github.io/)]\n\n[arxiv 2024.08] EasyInv: Toward Fast and Better DDIM Inversion [[PDF](https://arxiv.org/abs/2408.05159)]\n\n[arxiv 2024.08] AnyDesign: Versatile Area Fashion Editing via Mask-Free Diffusion [[PDF](https://arxiv.org/abs/2408.11553)]\n\n[arxiv 2024.10] On Inductive Biases That Enable Generalization of Diffusion Transformers  [[PDF](https://arxiv.org/abs/2410.21273),[Page](https://dit-generalization.github.io/)]\n\n[arxiv 2024.12] Causal Diffusion Transformer for Generative Modeling  [[PDF](https://arxiv.org/abs/2412.12095),[Page](https://github.com/causalfusion/causalfusion)] ![Code](https://img.shields.io/github/stars/causalfusion/causalfusion?style=social&label=Star)\n\n[arxiv 2024.12] E-CAR: Efficient Continuous Autoregressive Image Generation via Multistage Modeling  [[PDF](https://arxiv.org/abs/2412.14170)]\n\n[arxiv 2025.02]  Fractal Generative Models [[PDF](https://arxiv.org/pdf/2502.17437),[Page](https://github.com/LTH14/fractalgen)] ![Code](https://img.shields.io/github/stars/LTH14/fractalgen?style=social&label=Star)\n\n[arxiv 2025.03] Spiking Transformer: Introducing Accurate Addition-Only Spiking Self-Attention for Transformer  [[PDF](https://arxiv.org/pdf/2503.00226)]\n\n[arxiv 2025.03] DiT-Air: Revisiting the Efficiency of Diffusion Model Architecture Design in Text to Image Generation  [[PDF](https://arxiv.org/abs/2503.10618)]\n\n[arxiv 2025.06] M4V: Multi-Modal Mamba for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/xxx),[Page](https://huangjch526.github.io/M4V_project/)] \n\n[arxiv 2025.12] Visual Generation Tuning  [[PDF](https://arxiv.org/pdf/2511.23469),[Page](https://github.com/hustvl/VGT)] ![Code](https://img.shields.io/github/stars/hustvl/VGT?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Distribution \n\n[arxiv 2024.10] Rectified Diffusion: Straightness Is Not Your Need  [[PDF](https://arxiv.org/abs/2410.07303),[Page](https://github.com/G-U-N/Rectified-Diffusion)]\n\n[arxiv 2024.10] Simple ReFlow: Improved Techniques for Fast Flow Models  [[PDF](https://arxiv.org/abs/2410.07815),[Page]()]\n\n[arxiv 2024.10] Consistency Diffusion Bridge Models  [[PDF](https://arxiv.org/abs/2410.22637)]\n\n[arxiv 2024.12]  Orthus: Autoregressive Interleaved Image-Text Generation with Modality-Specific Heads [[PDF](https://arxiv.org/pdf/2412.00127)]\n\n[arxiv 2024.12]  [MASK] is All You Need [[PDF](https://arxiv.org/abs/2412.06787),[Page](https://compvis.github.io/mask/)] ![Code](https://img.shields.io/github/stars/CompVis/mask?style=social&label=Star)\n\n[arxiv 2024.12] See Further When Clear: Curriculum Consistency Model  [[PDF](https://arxiv.org/abs/2412.06295)]\n\n[arxiv 2024.12]  Analyzing and Improving Model Collapse in Rectified Flow Models [[PDF](https://arxiv.org/abs/2412.08175)]\n\n[arxiv 2024.12] Multimodal Latent Language Modeling with Next-Token Diffusion  
[[PDF](https://arxiv.org/abs/2412.08635),[Page](https://aka.ms/GeneralAI)] \n\n[arxiv 2024.12] Exploring Diffusion and Flow Matching Under Generator Matching  [[PDF](https://arxiv.org/abs/2412.11024)]\n\n[arxiv 2025.02] Variational Rectified Flow Matching  [[PDF](https://arxiv.org/pdf/2502.09616)]\n\n[arxiv 2025.02]  Designing a Conditional Prior Distribution for Flow-Based Generative Models [[PDF](https://arxiv.org/pdf/2502.09611)]\n\n[arxiv 2025.02]  Bidirectional Diffusion Bridge Models [[PDF](https://arxiv.org/abs/2502.09655),[Page](https://github.com/kvmduc/BDBM)] ![Code](https://img.shields.io/github/stars/kvmduc/BDBM?style=social&label=Star)\n\n[arxiv 2025.02] Is Noise Conditioning Necessary for Denoising Generative Models?  [[PDF](https://arxiv.org/pdf/2502.13129)]\n\n[arxiv 2025.03] The Curse of Conditions: Analyzing and Improving Optimal Transport for Conditional Flow-Based Generation  [[PDF](https://arxiv.org/abs/2503.10636),[Page](https://hkchengrex.github.io/C2OT)] ![Code](https://img.shields.io/github/stars/hkchengrex/C2OT?style=social&label=Star)\n\n[arxiv 2025.03] Deeply Supervised Flow-Based Models  [[PDF](https://arxiv.org/abs/2503.14494),[Page](https://deepflow-project.github.io/)]\n\n[arxiv 2025.04] PixelFlow: Pixel-Space Generative Models with Flow  [[PDF](https://arxiv.org/abs/2504.07963),[Page](https://github.com/ShoufaChen/PixelFlow)] ![Code](https://img.shields.io/github/stars/ShoufaChen/PixelFlow?style=social&label=Star)\n\n[arxiv 2025.06]  Contrastive Flow Matching [[PDF](https://arxiv.org/pdf/2506.05350),[Page](https://github.com/gstoica27/DeltaFM)] ![Code](https://img.shields.io/github/stars/gstoica27/DeltaFM?style=social&label=Star)\n\n[arxiv 2025.06] STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis  [[PDF](https://arxiv.org/abs/2506.06276)]\n\n[arxiv 2025.06] Improving Progressive Generation with Decomposable Flow Matching  [[PDF](https://arxiv.org/abs/2506.19839),[Page](https://snap-research.github.io/dfm/)] \n\n[arxiv 2025.07] Pyramidal Patchification Flow for Visual Generation  [[PDF](https://arxiv.org/pdf/2506.23543),[Page](https://github.com/fudan-generative-vision/PPFlow)] ![Code](https://img.shields.io/github/stars/fudan-generative-vision/PPFlow?style=social&label=Star)\n\n[arxiv 2025.07]  FACM: Flow-Anchored Consistency Models [[PDF](https://arxiv.org/abs/2507.03738),[Page](https://github.com/ali-vilab/FACM)] ![Code](https://img.shields.io/github/stars/ali-vilab/FACM?style=social&label=Star)\n\n[arxiv 2025.07] FashionPose: Text to Pose to Relight Image Generation for Personalized Fashion Visualization  [[PDF](https://arxiv.org/pdf/2507.13311)]\n\n[arxiv 2025.08]  Next Visual Granularity Generation [[PDF](https://arxiv.org/abs/2508.12811),[Page](https://yikai-wang.github.io/nvg/)] ![Code](https://img.shields.io/github/stars/Yikai-Wang/nvg?style=social&label=Star)\n\n[arxiv 2025.09]  Delta Velocity Rectified Flow for Text-to-Image Editing [[PDF](https://arxiv.org/pdf/2509.05342),[Page](https://github.com/gaspardbd/DeltaVelocityRectifiedFlow)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.09] CAR-Flow: Condition-Aware Reparameterization Aligns Source and Target for Better Flow Matching  [[PDF](https://arxiv.org/abs/2509.19300)]\n\n[arxiv 2025.10]  Authentic Discrete Diffusion Model [[PDF](https://arxiv.org/abs/2510.01047)]\n\n[arxiv 2025.10]  Blockwise Flow Matching: Improving Flow Matching Models For Efficient High-Quality Generation 
[[PDF](https://arxiv.org/abs/2510.21167)]

[arxiv 2025.12] StreamFlow: Theory, Algorithm, and Implementation for High-Efficiency Rectified Flow Generation  [[PDF](https://arxiv.org/abs/2511.22009),[Page](https://github.com/World-Snapshot/StreamFlow)] ![Code](https://img.shields.io/github/stars/World-Snapshot/StreamFlow?style=social&label=Star)

[arxiv 2025.12] SimFlow: Simplified and End-to-End Training of Latent Normalizing Flows  [[PDF](https://arxiv.org/abs/2512.04084),[Page](https://qinyu-allen-zhao.github.io/SimFlow/)] 

[arxiv 2026.01]  FlowConsist: Make Your Flow Consistent with Real Trajectory [[PDF](https://arxiv.org/pdf/2602.06346)]

[arxiv 2026.03] MPDiT: Multi-Patch Global-to-Local Transformer Architecture For Efficient Flow Matching and Diffusion Model  [[PDF](https://arxiv.org/abs/2603.26357)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## CFG
[arxiv 2025.01] Visual Generation Without Guidance  [[PDF](https://arxiv.org/abs/2501.15420),[Page](https://github.com/thu-ml/GFT)] ![Code](https://img.shields.io/github/stars/thu-ml/GFT?style=social&label=Star)

[arxiv 2025.02] REG: Rectified Gradient Guidance for Conditional Diffusion Models  [[PDF](https://arxiv.org/pdf/2501.18865)]

[arxiv 2025.02]  DICE: Distilling Classifier-Free Guidance into Text Embeddings [[PDF](https://arxiv.org/pdf/2502.03726)]

[arxiv 2025.02] Variational Control for Guidance in Diffusion Models  [[PDF](https://arxiv.org/pdf/2502.03686)]

[arxiv 2025.02] Diffusion Models without Classifier-free Guidance  [[PDF](https://arxiv.org/pdf/2502.12154),[Page](https://github.com/tzco/Diffusion-wo-CFG)] ![Code](https://img.shields.io/github/stars/tzco/Diffusion-wo-CFG?style=social&label=Star)

[arxiv 2025.02] Classifier-free Guidance with Adaptive Scaling  [[PDF](https://arxiv.org/pdf/2502.10574)]

[arxiv 2025.10]  Rectified-CFG++ for Flow Based Models [[PDF](https://arxiv.org/abs/2510.07631),[Page](https://rectified-cfgpp.github.io/)] ![Code](https://img.shields.io/github/stars/shreshthsaini/Rectified-CFGpp?style=social&label=Star)

[arxiv 2026.03] CFG-Ctrl: Control-Based Classifier-Free Diffusion Guidance  [[PDF](https://arxiv.org/abs/2603.03281),[Page](https://hanyang-21.github.io/CFG-Ctrl/)] ![Code](https://img.shields.io/github/stars/hanyang-21/CFG-Ctrl?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
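
Most papers in this section modify or remove the standard classifier-free guidance (CFG) combination rule, so it helps to keep the vanilla rule in mind. A minimal sketch, assuming a generic noise-prediction network `eps_model` (all names here are illustrative, not any listed paper's API):

```python
import torch

def cfg_denoise(eps_model, x_t, t, text_cond, w=7.5):
    """Vanilla classifier-free guidance (illustrative sketch)."""
    eps_uncond = eps_model(x_t, t, None)       # null-prompt branch
    eps_cond = eps_model(x_t, t, text_cond)    # text-conditioned branch
    # Extrapolate from the unconditional prediction toward the
    # conditional one with guidance scale w.
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy check with a dummy denoiser on random latents.
dummy = lambda x, t, c: x * 0.1 if c is None else x * 0.2
x = torch.randn(1, 4, 64, 64)
print(cfg_denoise(dummy, x, t=999, text_cond="a cat").shape)  # (1, 4, 64, 64)
```

Entries such as Classifier-free Guidance with Adaptive Scaling can be read as replacing the fixed scalar `w` with a step-dependent schedule, while the guidance-free approaches above aim to remove the second forward pass entirely.

## ROPE

[arxiv 2025.02] VideoRoPE: What Makes for Good Video Rotary Position Embedding?  [[PDF](https://arxiv.org/abs/2502.05173),[Page](https://github.com/Wiselnn570/VideoRoPE)] ![Code](https://img.shields.io/github/stars/Wiselnn570/VideoRoPE?style=social&label=Star)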
[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

## VLM guided Generation
[arxiv 2025.06] Dual-Process Image Generation  [[PDF](https://arxiv.org/abs/2506.01955),[Page](https://dual-process.github.io/)] ![Code](https://img.shields.io/github/stars/g-luo/dual_process?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Chat for editing 
[arxiv 2024.12] ChatDiT: A Training-Free Baseline for Task-Agnostic Free-Form Chatting with Diffusion Transformers  [[PDF](https://arxiv.org/abs/2412.12571),[Page](https://github.com/ali-vilab/ChatDiT)] ![Code](https://img.shields.io/github/stars/ali-vilab/ChatDiT?style=social&label=Star)


## Instruct for editing 
[arxiv 2024.07] GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing  [[PDF](https://arxiv.org/abs/2407.05600),[Page](https://zhenyuw16.github.io/GenArtist_page/)]

[arxiv 2024.07] UltraEdit: Instruction-based Fine-Grained Image Editing at Scale  [[PDF](https://arxiv.org/abs/2407.05282),[Page](https://ultra-editing.github.io/)]

[arxiv 2024.11] SeedEdit: Align Image Re-Generation to Image Editing  [[PDF](https://arxiv.org/abs/2411.06686),[Page](https://team.doubao.com/en/special/seededit)]

[arxiv 2024.11] Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models [[PDF](https://arxiv.org/abs/2411.07232),[Page](https://research.nvidia.com/labs/par/addit/)]

[arxiv 2024.11] OmniEdit: Building Image Editing Generalist Models Through Specialist Supervision [[PDF](https://arxiv.org/abs/2411.07199),[Page](https://tiger-ai-lab.github.io/OmniEdit/)]

[arxiv 2024.11] InsightEdit: Towards Better Instruction Following for Image Editing  [[PDF](https://poppyxu.github.io/InsightEdit_web/),[Page](https://poppyxu.github.io/InsightEdit_web/)] ![Code](https://img.shields.io/github/stars/poppyxu/InsightEdit?style=social&label=Star)

[arxiv 2024.12] HumanEdit: A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing  [[PDF](https://arxiv.org/abs/2412.04280),[Page](https://viiika.github.io/HumanEdit/)] ![Code](https://img.shields.io/github/stars/viiika/HumanEdit/?style=social&label=Star)

[arxiv 2024.12] FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing  [[PDF](https://arxiv.org/abs/2412.07517),[Page](https://github.com/HolmesShuan/FireFlow-Fast-Inversion-of-Rectified-Flow-for-Image-Semantic-Editing)] ![Code](https://img.shields.io/github/stars/HolmesShuan/FireFlow-Fast-Inversion-of-Rectified-Flow-for-Image-Semantic-Editing?style=social&label=Star)

[arxiv 2024.12] FluxSpace: Disentangled Semantic Editing in Rectified Flow Transformers  [[PDF](https://arxiv.org/abs/2312.05390),[Page](https://fluxspace.github.io/)] 

[arxiv 2024.12] Instruction-based Image Manipulation by Watching How Things Move  [[PDF](https://arxiv.org/abs/2412.12087),[Page](https://ljzycmd.github.io/projects/InstructMove/)] 

[arxiv 2024.12] UIP2P: Unsupervised Instruction-based Image Editing via Cycle Edit Consistency  [[PDF](https://arxiv.org/abs/2412.15216),[Page](https://enis.dev/uip2p/)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
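
The instruction-editing models above share one interface: an input image plus a free-form edit instruction. As a hedged, minimal illustration of that interface via the standard `diffusers` API and the public InstructPix2Pix checkpoint (model ID and parameter values are illustrative defaults, not the recipe of any paper listed here):

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Load an off-the-shelf instruction-editing model (illustrative checkpoint).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = load_image("input.jpg")      # source image to edit
edited = pipe(
    "make it snow",                  # free-form edit instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,        # fidelity to the input image
    guidance_scale=7.5,              # adherence to the instruction
).images[0]
edited.save("edited.jpg")
```

Most newer entries keep this call signature and differ in how the editor is trained (supervision data, reward models, or cycle consistency).

## Improve T2I base modules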
[arxiv 2023]LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts [[PDF](https://arxiv.org/abs/2310.10640),[Page](https://github.com/hananshafi/llmblueprint)]

[arxiv 2023.11]Self-correcting LLM-controlled Diffusion Models [[PDF](https://arxiv.org/abs/2311.16090)]

[arxiv 2023.11]Enhancing Diffusion Models with Text-Encoder Reinforcement Learning [[PDF](https://arxiv.org/abs/2311.15657)]

[arxiv 2023.11]Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following [[PDF](https://arxiv.org/abs/2311.17002)]

[arxiv 2023.12]Unlocking Spatial Comprehension in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2311.17937)]

[arxiv 2023.12]Fair Text-to-Image Diffusion via Fair Mapping [[PDF](https://arxiv.org/abs/2311.17695)]

[arxiv 2023.12]CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2312.06059)]

[arxiv 2023.12]DreamDistribution: Prompt Distribution Learning for Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2312.14216),[Page](https://briannlongzhao.github.io/DreamDistribution)]

[arxiv 2023.12]Prompt Expansion for Adaptive Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.16720)]

[arxiv 2023.12]Diffusion Model with Perceptual Loss [[PDF](https://arxiv.org/abs/2401.00110)]

[arxiv 2024.01]EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2401.04608)]

[arxiv 2024.01]DiffusionGPT: LLM-Driven Text-to-Image Generation System [[PDF](https://arxiv.org/abs/2401.10061)]

[arxiv 2024.01]Divide and Conquer: Language Models can Plan and Self-Correct for Compositional Text-to-Image Generation[[PDF](https://arxiv.org/abs/2401.15688)]

[arxiv 2024.02]MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2402.05408.pdf),[Page](https://migcproject.github.io/)]

[arxiv 2024.02]Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.05375),[Page](https://github.com/sen-mao/SuppressEOT)]

[arxiv 2024.02]InstanceDiffusion: Instance-level Control for Image Generation [[PDF](https://arxiv.org/abs/2402.03290),[Page](https://people.eecs.berkeley.edu/~xdwang/projects/InstDiff/)]

[arxiv 2024.02]Learning Continuous 3D Words for Text-to-Image Generation[[PDF](https://ttchengab.github.io/continuous_3d_words/c3d_words.pdf),[Page](https://ttchengab.github.io/continuous_3d_words/)]

[arxiv 2024.02]Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation[[PDF](https://arxiv.org/abs/2402.10210)]

[arxiv 2024.02]RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.12908),[Page](https://github.com/YangLing0818/RealCompo)]

[arxiv 2024.02]A User-Friendly Framework for Generating Model-Preferred Prompts in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2402.12760)]

[arxiv 2024.02]Contrastive Prompts Improve Disentanglement in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2402.13490)]

[arxiv 2024.02]Structure-Guided Adversarial Training of Diffusion Models[[PDF](https://arxiv.org/abs/2402.17563)]

[arxiv 2024.03]SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data [[PDF](https://arxiv.org/abs/2403.06952),[Page](https://selma-t2i.github.io/)]

[arxiv 2024.03]ELLA: Equip Diffusion Models with LLM for 
Enhanced Semantic Alignment [[PDF](https://arxiv.org/abs/2403.05135),[Page](https://ella-diffusion.github.io/)]\n\n[arxiv 2024.03]Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.07860),[Page](https://github.com/ShihaoZhaoZSH/LaVi-Bridge)]\n\n[arxiv 2024.03]Optimizing Negative Prompts for Enhanced Aesthetics and Fidelity in Text-To-Image Generation [[PDF](https://arxiv.org/abs/2403.07605)]\n\n[arxiv 2024.03]FouriScale: A Frequency Perspective on Training-Free High-Resolution Image Synthesis [[PDF](https://arxiv.org/abs/2403.12963)]\n\n[arxiv 2024.04]Getting it Right: Improving Spatial Consistency in Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.01197),[Page](https://spright-t2i.github.io/)]\n\n[arxiv 2024.04]Dynamic Prompt Optimizing for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2404.04095)]\n\n[arxiv 2024.04]Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching [[PDF](https://arxiv.org/abs/2404.03653),[Page](https://caraj7.github.io/comat/)]\n\n[arxiv 2024.04]Align Your Steps: Optimizing Sampling Schedules in Diffusion Models [[PDF](https://arxiv.org/abs/2404.14507),[Page](https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/)]\n\n[arxiv 2024.04]Stylus: Automatic Adapter Selection for Diffusion Models [[PDF](https://arxiv.org/abs/2404.18928),[Page](https://stylus-diffusion.github.io/)]\n\n[arxiv 2024.05]Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2405.00760)]\n\n[arxiv 2024.05]Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model [[PDF](https://arxiv.org/abs/2405.03958)]\n\n[arxiv 2024.05]Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models [[PDF](https://arxiv.org/abs/2405.05252)]\n\n[arxiv 2024.05]An Empirical Study and Analysis of Text-to-Image Generation Using Large Language Model-Powered Textual Representation [[PDF](https://arxiv.org/abs/2405.12914)]\n\n[arxiv 2024.05] Learning Multi-dimensional Human Preference for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2405.14705)]\n\n[arxiv 2024.05] Class-Conditional self-reward mechanism for improved Text-to-Image models  [[PDF](https://arxiv.org/abs/2405.13473)]\n\n[arxiv 2024.05]  LiteVAE: Lightweight and Efficient Variational Autoencoders for Latent Diffusion Models [[PDF](https://arxiv.org/abs/2405.14477)]\n\n[arxiv 2024.05] SG-Adapter: Enhancing Text-to-Image Generation with Scene Graph Guidance  [[PDF](https://arxiv.org/abs/2405.15321)]\n\n[arxiv 2024.05] Training-free Editioning of Text-to-Image Models [[PDF](https://arxiv.org/abs/2405.17069)]\n\n[arxiv 2024.05] PromptFix: You Prompt and We Fix the Photo [[PDF](https://arxiv.org/abs/2405.16785),[Page](https://github.com/yeates/PromptFix)]\n\n[arxiv 2024.06] Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling [[PDF](https://arxiv.org/abs/2405.21048)]\n\n[arxiv 2024.06]Improving GFlowNets for Text-to-Image Diffusion Alignment [[PDF](https://arxiv.org/abs/2406.00633)]\n\n[arxiv 2024.06] Diffusion Soup: Model Merging for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.08431),[Page]()]\n\n[arxiv 2024.06] CFG++: Manifold-constrained Classifier Free Guidance for Diffusion Models[[PDF](https://arxiv.org/abs/2406.08070),[Page](https://github.com/CFGpp-diffusion/CFGpp)]\n\n[arxiv 2024.06]Understanding and Mitigating Compositional Issues in Text-to-Image Generative Models 
[[PDF](https://arxiv.org/abs/2406.07844),[Page](https://github.com/ArmanZarei/Mitigating-T2I-Comp-Issues)]\n\n[arxiv 2024.06] Make It Count: Text-to-Image Generation with an Accurate Number of Objects [[PDF](https://arxiv.org/abs/2406.10210),[Page](https://make-it-count-paper.github.io/)]\n\n[arxiv 2024.06]  AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.12805),[Page](https://github.com/itsmag11/AITTI)]\n\n[arxiv 2024.06] Exploring the Role of Large Language Models in Prompt Encoding for Diffusion Models  [[PDF](https://arxiv.org/abs/2406.11831)]\n\n[arxiv 2024.06] Neural Residual Diffusion Models for Deep Scalable Vision Generation [[PDF](https://arxiv.org/abs/2406.13215)]\n\n[arxiv 2024.06] ARTIST: Improving the Generation of Text-rich Images by Disentanglement  [[PDF](https://arxiv.org/abs/2406.12044),[Page]()]\n\n[arxiv 2024.06] Not All Prompts Are Made Equal: Prompt-based Pruning of Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.12042)]\n\n[arxiv 2024.06]Fine-tuning Diffusion Models for Enhancing Face Quality in Text-to-image Generation[[PDF](https://arxiv.org/abs/2406.17100)]\n\n[arxiv 2024.07]PopAlign: Population-Level Alignment for Fair Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.19668)]\n\n[arxiv 2024.07]  LLM4GEN: Leveraging Semantic Representation of LLMs for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2407.00737),[Page](https://xiaobul.github.io/LLM4GEN/)]\n\n[arxiv 2024.07]  Prompt Refinement with Image Pivot for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2407.00247),[Page]()]\n\n[arxiv 2024.07] Improved Noise Schedule for Diffusion Training  [[PDF](https://arxiv.org/abs/2407.03297)]\n\n[arxiv 2024.07]  No Training, No Problem: Rethinking Classifier-Free Guidance for Diffusion Models [[PDF](https://arxiv.org/abs/2407.02687)]\n\n[arxiv 2024.07] Not All Noises Are Created Equally:Diffusion Noise Selection and Optimization  [[PDF](https://arxiv.org/abs/2407.14041)]\n\n[arxiv 2024.07] GeoGuide: Geometric guidance of diffusion models [[PDF](https://arxiv.org/abs/2407.12889)]\n\n[arxiv 2024.08]  Understanding the Local Geometry of Generative Model Manifolds [[PDF](https://arxiv.org/pdf/2408.08307)]\n\n[arxiv 2024.08] Iterative Object Count Optimization for Text-to-image Diffusion Models[[PDF](https://arxiv.org/abs/2408.11721)]\n\n[arxiv 2024.08]FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive Prompt Weighting[[PDF](https://arxiv.org/abs/2408.11706)]\n\n[arxiv 2024.08] Compress Guidance in Conditional Diffusion Sampling  [[PDF](https://arxiv.org/abs/2408.11194)]\n\n[arxiv 2024.09]  Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2409.06493)]\n\n[arxiv 2024.09] Generalizing Alignment Paradigm of Text-to-Image Generation with Preferences through f-divergence Minimization  [[PDF](https://arxiv.org/abs/2409.09774)]\n\n[arxiv 2024.09]  Pixel-Space Post-Training of Latent Diffusion Models [[PDF](https://arxiv.org/pdf/2409.17565),[Page]()]\n\n[arxiv 2024.09] Improvements to SDXL in NovelAI Diffusion V3 [[PDF](https://arxiv.org/abs/2409.15997)]\n\n[arxiv 2024.10] Removing Distributional Discrepancies in Captions Improves Image-Text Alignment [[PDF](https://github.com/adobe-research/llava-score),[Page](https://yuheng-li.github.io/LLaVA-score/)]\n\n[arxiv 2024.10] ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation  
[[PDF](https://arxiv.org/abs/2410.01731),[Page](https://comfygen-paper.github.io/)]\n\n[arxiv 2024.10] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think  [[PDF](https://arxiv.org/abs/2410.06940)]\n\n[arxiv 2024.10] Decouple-Then-Merge: Towards Better Training for Diffusion Models  [[PDF](https://arxiv.org/abs/2410.06664)]\n\n[arxiv 2024.10] Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.06025),[Page]()]\n\n\n[arxiv 2024.10] Training-free Diffusion Model Alignment with Sampling Demons  [[PDF](https://arxiv.org/abs/2410.05760)]\n\n[arxiv 2024.10]  Diffusion Models Need Visual Priors for Image Generation [[PDF](https://arxiv.org/abs/2410.08531)]\n\n[arxiv 2024.10] Improving Long-Text Alignment for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.11817),[Page](https://github.com/luping-liu/LongAlign)]\n\n[arxiv 2024.10] Dynamic Negative Guidance of Diffusion Models[[PDF](https://arxiv.org/abs/2410.14398)]\n\n[arxiv 2024.10] GraspDiffusion: Synthesizing Realistic Whole-body Hand-Object Interaction [[PDF](https://arxiv.org/abs/2410.13911),[Page]()]\n\n[arxiv 2024.10]  Progressive Compositionality In Text-to-Image Generative Models [[PDF](https://arxiv.org/abs/2410.16719),[Page](https://github.com/evansh666/EvoGen)]\n\n[arxiv 2024.11]  HandCraft: Anatomically Correct Restoration of Malformed Hands in Diffusion Generated Images [[PDF](https://arxiv.org/abs/2411.04332),[Page](https://kfzyqin.github.io/handcraft/)]\n\n[arxiv 2024.11] Improving image synthesis with diffusion-negative sampling  [[PDF](https://arxiv.org/abs/2411.05473)]\n\n[arxiv 2024.11]  Token Merging for Training-Free Semantic Binding in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2411.07132),[Page](https://github.com/hutaiHang/ToMe)]\n\n[arxiv 2024.11] Noise Diffusion for Enhancing Semantic Faithfulness in Text-to-Image Synthesis  [[PDF](https://arxiv.org/abs/2411.16503),[Page](https://github.com/Bomingmiao/NoiseDiffusion)] ![Code](https://img.shields.io/github/stars/Bomingmiao/NoiseDiffusion?style=social&label=Star)\n\n[arxiv 2024.11] Text Embedding is Not All You Need: Attention Control for Text-to-Image Semantic Alignment with Text Self-Attention Maps  [[PDF](https://arxiv.org/abs/2411.15236)]\n\n[arxiv 2024.11] Relations, Negations, and Numbers: Looking for Logic in Generative Text-to-Image Models  [[PDF](https://arxiv.org/abs/2411.17066),[Page](https://github.com/ColinConwell/T2I-Probology)] ![Code](https://img.shields.io/github/stars/ColinConwell/T2I-Probology?style=social&label=Star)\n\n[arxiv 2024.11] Contrastive CFG: Improving CFG in Diffusion Models by Contrasting Positive and Negative Concepts  [[PDF](https://arxiv.org/abs/2411.17077)] \n\n[arxiv 2024.12] Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects  [[PDF](https://arxiv.org/pdf/2411.18936) ]\n\n[arxiv 2024.12] Enhancing Compositional Text-to-Image Generation with Reliable Random Seeds  [[PDF](https://arxiv.org/abs/2411.18810)]\n\n[arxiv 2024.12]  Enhancing MMDiT-Based Text-to-Image Models for Similar Subject Generation [[PDF](https://arxiv.org/pdf/2411.18301),[Page](https://github.com/wtybest/EnMMDiT)] ![Code](https://img.shields.io/github/stars/wtybest/EnMMDiT?style=social&label=Star)\n\n[arxiv 2024.12] Addressing Attribute Leakages in Diffusion-based Image Editing without Training  [[PDF](https://arxiv.org/abs/2412.04715)]\n\n[arxiv 2024.12]  Learning Visual Generative Priors without Text 
[[PDF](https://arxiv.org/abs/2412.07767),[Page](https://xiaomabufei.github.io/lumos/)] ![Code](https://img.shields.io/github/stars/xiaomabufei/lumos?style=social&label=Star)\n\n[arxiv 2024.12] FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2412.07674),[Page](https://fiva-dataset.github.io/)] ![Code](https://img.shields.io/github/stars/wutong16/FiVA?style=social&label=Star)\n\n[arxiv 2024.12]  Fast Prompt Alignment for Text-to-Image Generation [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2024.12] Context Canvas: Enhancing Text-to-Image Diffusion Models with Knowledge Graph-Based RAG  [[PDF](https://arxiv.org/abs/2412.09614),[Page](https://context-canvas.github.io/)] \n\n[arxiv 2024.12] CoMPaSS: Enhancing Spatial Understanding in Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2412.13195),[Page](https://github.com/blurgyy/CoMPaSS)] ![Code](https://img.shields.io/github/stars/blurgyy/CoMPaSS?style=social&label=Star)\n\n[arxiv 2025.01] E2EDiff: Direct Mapping from Noise to Data for Enhanced Diffusion Models  [[PDF](https://arxiv.org/abs/2412.21044)] \n\n[arxiv 2025.01] Focus-N-Fix: Region-Aware Fine-Tuning for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2501.06481)]\n\n[arxiv 2025.03] T2ICount: Enhancing Cross-modal Understanding for Zero-Shot Counting  [[PDF](https://arxiv.org/abs/2502.20625),[Page](https://github.com/cha15yq/T2ICount)] ![Code](https://img.shields.io/github/stars/cha15yq/T2ICount?style=social&label=Star)\n\n[arxiv 2025.03]  Investigating and Improving Counter-Stereotypical Action Relation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2503.10037)]\n\n[arxiv 2025.04] ESPLoRA: Enhanced Spatial Precision with Low-Rank Adaption in Text-to-Image Diffusion Models for High-Definition Synthesis  [[PDF](https://arxiv.org/abs/2504.13745)]\n\n[arxiv 2025.05]  VSC: Visual Search Compositional Text-to-Image Diffusion Model [[PDF](https://arxiv.org/abs/2505.01104)]\n\n[arxiv 2025.05] DetailMaster: Can Your Text-to-Image Model Handle Long Prompts?  
[[PDF](https://arxiv.org/abs/2505.16915)]

[arxiv 2025.05]  Harnessing Caption Detailness for Data-Efficient Text-to-Image Generation[[PDF](https://arxiv.org/abs/2505.15172)]

[arxiv 2025.06]  IMAGHarmony: Controllable Image Editing with Consistent Object Quantity and Layout [[PDF](https://arxiv.org/abs/2506.01949),[Page](https://github.com/muzishen/IMAGHarmony)] ![Code](https://img.shields.io/github/stars/muzishen/IMAGHarmony?style=social&label=Star)

[arxiv 2025.06]  TACA: Rethinking Cross-Modal Interaction in Multimodal Diffusion Transformers [[PDF](https://arxiv.org/pdf/2506.07986),[Page](https://github.com/Vchitect/TACA)] ![Code](https://img.shields.io/github/stars/Vchitect/TACA?style=social&label=Star)

[arxiv 2025.08] CountLoop: Iterative Agent Guided High Instance Image Generation  [[PDF](https://openreview.net/pdf?id=NZ0H1XtcZG),[Page](https://mondalanindya.github.io/CountLoop/)] ![Code](https://img.shields.io/github/stars/mondalanindya/CountLoop/?style=social&label=Star)

[arxiv 2025.09]  Maestro: Self-Improving Text-to-Image Generation via Agent Orchestration [[PDF](https://arxiv.org/abs/2509.10704)]

[arxiv 2025.09]  Understand Before You Generate: Self-Guided Training for Autoregressive Image Generation [[PDF](https://arxiv.org/abs/2509.15185)]

[arxiv 2025.10]  Asynchronous Denoising Diffusion Models for Aligning Text-to-Image Generation [[PDF](https://arxiv.org/abs/2510.04504),[Page](https://github.com/hu-zijing/AsynDM)] ![Code](https://img.shields.io/github/stars/hu-zijing/AsynDM?style=social&label=Star)

[arxiv 2025.10]  Head-wise Adaptive Rotary Positional Encoding for Fine-Grained Image Generation [[PDF](https://arxiv.org/abs/2510.10489)]

[arxiv 2026.01] Agentic Retoucher for Text-To-Image Generation  [[PDF](https://arxiv.org/pdf/2601.02046)]

[arxiv 2026.03] Early Failure Detection and Intervention in Video Diffusion Models  [[PDF](https://arxiv.org/abs/2603.14320)]

[arxiv 2026.03] Semantic-Aware Prefix Learning for Token-Efficient Image Generation  [[PDF](https://arxiv.org/abs/2603.25249)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
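
Many entries in this section tune prompts, negative prompts, or the guidance scale rather than the base model itself. As a hedged, minimal illustration of those knobs through the standard `diffusers` text-to-image API (checkpoint and values are illustrative only):

```python
import torch
from diffusers import StableDiffusionPipeline

# Any text-to-image checkpoint works; this one is just a common example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a photo of an astronaut riding a horse",
    negative_prompt="blurry, low quality, extra limbs",  # suppressed content
    guidance_scale=7.5,       # the CFG scale several papers above adapt
    num_inference_steps=30,
).images[0]
image.save("sample.png")
```

## data augmentation
[arxiv 2025.03] How far can we go with ImageNet for Text-to-Image generation?  [[PDF](https://arxiv.org/abs/2502.21318),[Page](https://lucasdegeorge.github.io/projects/t2i_imagenet/)] ![Code](https://img.shields.io/github/stars/lucasdegeorge/T2I-ImageNet?style=social&label=Star)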
[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## VAE / tokenizer

[arxiv 2024.06] Scaling the Codebook Size of VQGAN to 100,000 with a Utilization Rate of 99%  [[PDF](https://arxiv.org/abs/2406.11837)]

[arxiv 2024.11]  Adaptive Length Image Tokenization via Recurrent Allocation [[PDF](https://arxiv.org/abs/2411.02393),[Page](https://github.com/ShivamDuggal4/adaptive-length-tokenizer)]

[arxiv 2025.01] CAT: Content-Adaptive Image Tokenization  [[PDF](https://arxiv.org/pdf/2501.03120)]

[arxiv 2025.01] One-D-Piece: Image Tokenizer Meets Quality-Controllable Compression  [[PDF](https://arxiv.org/abs/2501.10064),[Page](https://turingmotors.github.io/one-d-piece-tokenizer)] 

[arxiv 2025.02] Diffusion Autoencoders are Scalable Image Tokenizers  [[PDF](https://arxiv.org/abs/2501.18593),[Page](https://yinboc.github.io/dito/)] ![Code](https://img.shields.io/github/stars/yinboc/dito?style=social&label=Star)

[arxiv 2025.02]  Masked Autoencoders Are Effective Tokenizers for Diffusion Models [[PDF](https://arxiv.org/abs/2502.03444)]

[arxiv 2025.02] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2502.05178),[Page](https://nvlabs.github.io/QLIP/)] ![Code](https://img.shields.io/github/stars/NVlabs/QLIP?style=social&label=Star)

[arxiv 2025.03] DLF: Extreme Image Compression with Dual-generative Latent Fusion  [[PDF](https://arxiv.org/pdf/2503.01428)]

[arxiv 2025.03] FlowTok: Flowing Seamlessly Across Text and Image Tokens  [[PDF](https://arxiv.org/pdf/2503.10772),[Page](https://tacju.github.io/projects/flowtok.html)] ![Code](https://img.shields.io/github/stars/bytedance/1d-tokenizer?style=social&label=Star)

[arxiv 2025.03] Tokenize Image as a Set  [[PDF](https://arxiv.org/pdf/2503.16425),[Page](https://github.com/Gengzigang/TokenSet)] ![Code](https://img.shields.io/github/stars/Gengzigang/TokenSet?style=social&label=Star)

[arxiv 2025.04] GigaTok: Scaling Visual Tokenizers to 3 Billion Parameters for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2504.08736),[Page](https://silentview.github.io/GigaTok/)] ![Code](https://img.shields.io/github/stars/SilentView/GigaTok?style=social&label=Star)

[arxiv 2025.05]  TokBench: Evaluating Your Visual Tokenizer before Visual Generation [[PDF](https://arxiv.org/pdf/2505.18142),[Page](https://wjf5203.github.io/TokBench/)] ![Code](https://img.shields.io/github/stars/wjf5203/TokBench?style=social&label=Star)

[arxiv 2025.06]  AliTok: Towards Sequence Modeling Alignment between Tokenizer and Autoregressive Model [[PDF](https://arxiv.org/pdf/2506.05289),[Page](https://github.com/ali-vilab/alitok)] ![Code](https://img.shields.io/github/stars/ali-vilab/alitok?style=social&label=Star)

[arxiv 2025.06] Highly Compressed Tokenizer Can Generate Without Training  [[PDF](https://arxiv.org/html/2506.08257v1),[Page](https://github.com/lukaslaobeyer/token-opt)] ![Code](https://img.shields.io/github/stars/lukaslaobeyer/token-opt?style=social&label=Star)

[arxiv 2025.06]  FlexTok: Resampling Images into 1D Token Sequences of Flexible Length [[PDF](https://arxiv.org/abs/2502.13967),[Page](https://github.com/apple/ml-flextok)] 
![Code](https://img.shields.io/github/stars/apple/ml-flextok?style=social&label=Star)

[arxiv 2025.07]  Holistic Tokenizer for Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2507.02358),[Page](https://github.com/CVMI-Lab/Hita)] ![Code](https://img.shields.io/github/stars/CVMI-Lab/Hita?style=social&label=Star)

[arxiv 2025.07]  MGVQ: Could VQ-VAE Beat VAE? A Generalizable Tokenizer with Multi-group Quantization [[PDF](https://arxiv.org/abs/2507.07997),[Page](https://github.com/MKJia/MGVQ)] ![Code](https://img.shields.io/github/stars/MKJia/MGVQ?style=social&label=Star)

[arxiv 2025.07] Quantize-then-Rectify: Efficient VQ-VAE Training  [[PDF](https://arxiv.org/abs/2507.10547)]

[arxiv 2025.07] Latent Denoising Makes Good Visual Tokenizers  [[PDF](https://arxiv.org/abs/2507.15856),[Page](https://github.com/Jiawei-Yang/DeTok)] ![Code](https://img.shields.io/github/stars/Jiawei-Yang/DeTok?style=social&label=Star)

[arxiv 2025.07]  DC-Gen: Accelerating Diffusion Models with Compressed Latent Space [[PDF](https://arxiv.org/abs/2508.00413),[Page](https://github.com/dc-ai-projects/DC-Gen)] ![Code](https://img.shields.io/github/stars/dc-ai-projects/DC-Gen?style=social&label=Star)

[arxiv 2025.09]  Image Tokenizer Needs Post-Training [[PDF](https://arxiv.org/abs/2509.12474),[Page](https://qiuk2.github.io/works/RobusTok/index.html)] ![Code](https://img.shields.io/github/stars/qiuk2/RobusTok?style=social&label=Star)

[arxiv 2025.10] SSDD: Single-Step Diffusion Decoder for Efficient Image Tokenization  [[PDF](https://arxiv.org/abs/2510.04961),[Page](https://github.com/facebookresearch/SSDD)] ![Code](https://img.shields.io/github/stars/facebookresearch/SSDD?style=social&label=Star)

[arxiv 2025.12] Both Semantics and Reconstruction Matter: Making Representation Encoders Ready for Text-to-Image Generation and Editing  [[PDF](https://jshilong.github.io/PS-VAE-PAGE/),[Page](https://jshilong.github.io/PS-VAE-PAGE/)] 

[arxiv 2026.01] NativeTok: Native Visual Tokenization for Improved Image Generation  [[PDF](https://arxiv.org/abs/2601.22837),[Page](https://github.com/wangbei1/Nativetok)] ![Code](https://img.shields.io/github/stars/wangbei1/Nativetok?style=social&label=Star)

[arxiv 2026.03] CaTok: Taming Mean Flows for One-Dimensional Causal Image Tokenization  [[PDF](https://arxiv.org/abs/2603.06449),[Page](https://sharelab-sii.github.io/catok-web/)] ![Code](https://img.shields.io/github/stars/ShareLab-SII/CaTok?style=social&label=Star)

[arxiv 2026.03] RPiAE: A Representation-Pivoted Autoencoder Enhancing Both Image Generation and Editing  [[PDF](https://arxiv.org/abs/2603.19206),[Page](https://arthuring.github.io/RPiAE-page/)]

[arxiv 2026.03] End-to-End Training for Unified Tokenization and Latent Denoising  [[PDF](https://arxiv.org/abs/2603.22283)]

[arxiv 2026.03] DA-VAE: Plug-in Latent Compression for Diffusion via Detail Alignment  [[PDF](https://arxiv.org/abs/2603.22125),[Page](https://caixin98.github.io/davae/#)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
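
Since many tokenizers above are VQ-style, the core codebook lookup is worth keeping in mind. A minimal sketch of nearest-neighbor vector quantization with the straight-through gradient, in plain PyTorch (shapes and names illustrative, not any listed paper's implementation):

```python
import torch

def vector_quantize(z, codebook):
    """z: (N, D) encoder outputs; codebook: (K, D) learnable codes."""
    # Squared L2 distance from every latent to every codebook entry.
    d = (z ** 2).sum(1, keepdim=True) - 2 * z @ codebook.T + (codebook ** 2).sum(1)
    idx = d.argmin(dim=1)        # nearest code index per latent
    z_q = codebook[idx]          # quantized latents
    # Straight-through estimator: copy gradients from z_q back to z.
    z_q = z + (z_q - z).detach()
    return z_q, idx

z = torch.randn(16, 64, requires_grad=True)
codebook = torch.randn(512, 64)
z_q, idx = vector_quantize(z, codebook)
print(z_q.shape, idx.shape)  # torch.Size([16, 64]) torch.Size([16])
```

Several entries above (codebook scaling, multi-group quantization, post-training) can be read as variations on making this lookup larger, better utilized, or more robust.

## autoregressive

[arxiv 2024.11] Autoregressive Models in Vision: A Survey  [[PDF](https://arxiv.org/abs/2411.05902),[Page](https://github.com/ChaofanTao/Autoregressive-Models-in-Vision-Survey)]

[arxiv 2024.10] ControlAR: Controllable Image Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2410.02705),[Page](https://github.com/hustvl/ControlAR)]

[arxiv 2024.10] LANTERN: Accelerating Visual Autoregressive 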
Models with Relaxed Speculative Decoding  [[PDF](https://arxiv.org/abs/2410.03355)]

[arxiv 2024.10] CAR: Controllable Autoregressive Modeling for Visual Generation  [[PDF](https://arxiv.org/abs/2410.04671),[Page](https://github.com/MiracleDance/CAR)]

[arxiv 2024.10]  Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective [[PDF](https://arxiv.org/abs/2410.12490),[Page](https://github.com/DAMO-NLP-SG/DiGIT)]

[arxiv 2024.10] LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior  [[PDF](https://arxiv.org/abs/2410.21264),[Page](https://hywang66.github.io/larp/)]

[arxiv 2024.11] Randomized Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2411.00776),[Page](https://yucornetto.github.io/projects/rar.html)]

[arxiv 2024.11] A Survey on Vision Autoregressive Model  [[PDF](https://arxiv.org/abs/2411.08666)]

[arxiv 2024.11] M-VAR: Decoupled Scale-wise Autoregressive Modeling for High-Quality Image Generation  [[PDF](https://arxiv.org/abs/2411.10433),[Page](https://github.com/OliverRensu/MVAR)]

[arxiv 2024.11] LaVin-DiT: Large Vision Diffusion Transformer  [[PDF](https://arxiv.org/abs/2411.11505)]

[arxiv 2024.11] Scalable Autoregressive Monocular Depth Estimation  [[PDF](https://arxiv.org/abs/2411.11361)]

[arxiv 2024.11] Continuous Speculative Decoding for Autoregressive Image Generation [[PDF](https://arxiv.org/abs/2411.11925),[Page](https://github.com/MarkXCloud/CSpD)]

[arxiv 2024.11]  Sample- and Parameter-Efficient Auto-Regressive Image Models [[PDF](https://arxiv.org/abs/2411.15648),[Page](https://github.com/elad-amrani/xtra)] ![Code](https://img.shields.io/github/stars/elad-amrani/xtra?style=social&label=Star)

[arxiv 2024.11] LiteVAR: Compressing Visual Autoregressive Modelling with Efficient Attention and Quantization  [[PDF](https://arxiv.org/abs/2411.17178)] 

[arxiv 2024.12]  CoDe: Collaborative Decoding Makes Visual Auto-Regressive Modeling Efficient [[PDF](https://arxiv.org/abs/2411.17787),[Page](https://czg1225.github.io/CoDe_page/)] ![Code](https://img.shields.io/github/stars/czg1225/CoDe?style=social&label=Star)

[arxiv 2024.12] RandAR: Decoder-only Autoregressive Visual Generation in Random Orders  [[PDF](https://arxiv.org/abs/2412.01827),[Page](https://rand-ar.github.io/)] ![Code](https://img.shields.io/github/stars/ziqipang/RandAR?style=social&label=Star)

[arxiv 2024.12] Switti: Designing Scale-Wise Transformers for Text-to-Image Synthesis  [[PDF](https://arxiv.org/pdf/2412.01819),[Page](https://yandex-research.github.io/switti/)] ![Code](https://img.shields.io/github/stars/yandex-research/switti?style=social&label=Star)

[arxiv 2024.12] Taming Scalable Visual Tokenizer for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2412.02692),[Page](https://github.com/TencentARC/SEED-Voken)] 

[arxiv 2024.12] XQ-GAN: An Open-source Image Tokenization Framework for Autoregressive Generation  [[PDF](https://arxiv.org/abs/2412.01762),[Page](https://github.com/lxa9867/ImageFolder)] ![Code](https://img.shields.io/github/stars/lxa9867/ImageFolder?style=social&label=Star)

[arxiv 2024.12]  TinyFusion: Diffusion Transformers Learned Shallow [[PDF](https://arxiv.org/abs/2412.01199),[Page](https://github.com/VainF/TinyFusion)] ![Code](https://img.shields.io/github/stars/VainF/TinyFusion?style=social&label=Star)

[arxiv 2024.12] Infinity ∞: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis  
[[PDF](https://arxiv.org/abs/2412.04431),[Page](https://github.com/FoundationVision/Infinity)] ![Code](https://img.shields.io/github/stars/FoundationVision/Infinity?style=social&label=Star)\n\n[arxiv 2024.12] ZipAR: Accelerating Autoregressive Image Generation through Spatial Locality  [[PDF](https://arxiv.org/abs/2412.04062)]\n\n[arxiv 2024.12] ACDiT: Interpolating Autoregressive Conditional Modeling and Diffusion Transformer  [[PDF](https://arxiv.org/abs/2412.07720)]\n\n[arxiv 2024.12]  FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching [[PDF](https://arxiv.org/abs/2412.15205),[Page](https://github.com/OliverRensu/FlowAR)] ![Code](https://img.shields.io/github/stars/OliverRensu/FlowAR?style=social&label=Star)\n\n[arxiv 2024.12]  Parallelized Autoregressive Visual Generation [[PDF](https://epiphqny.github.io/PAR-project/#),[Page](https://epiphqny.github.io/PAR-project/)] ![Code](https://img.shields.io/github/stars/Epiphqny/PAR?style=social&label=Star)\n\n[arxiv 2024.12] RDPM: Solve Diffusion Probabilistic Models via Recurrent Token Prediction  [[PDF](https://arxiv.org/pdf/2412.18390)]\n\n[arxiv 2025.02] Beyond Next-Token: Next-X Prediction for Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2502.20388),[Page](https://oliverrensu.github.io/project/xAR/)] ![Code](https://img.shields.io/github/stars/OliverRensu/xAR?style=social&label=Star)\n\n[arxiv 2025.03]  NFIG: Autoregressive Image Generation with Next-Frequency Prediction [[PDF](https://arxiv.org/abs/2503.07076)]\n\n[arxiv 2025.03] Autoregressive Image Generation with Randomized Parallel Decoding  [[PDF](https://arxiv.org/abs/2503.10568),[Page](https://github.com/hp-l33/ARPG)] ![Code](https://img.shields.io/github/stars/hp-l33/ARPG?style=social&label=Star)\n\n[arxiv 2025.03]  Direction-Aware Diagonal Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2503.11129)]\n\n[arxiv 2025.03] TokenBridge: Bridging Continuous and Discrete Tokens for Autoregressive Visual Generation  [[PDF](https://arxiv.org/abs/2503.16430),[Page](https://yuqingwang1029.github.io/TokenBridge/)] ![Code](https://img.shields.io/github/stars/yuqingwang1029/TokenBridge?style=social&label=Star)\n\n[arxiv 2025.03]  Improving Autoregressive Image Generation through Coarse-to-Fine Token Prediction [[PDF](https://arxiv.org/abs/2503.16194)]\n\n[arxiv 2025.04]  FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning [[PDF](https://arxiv.org/abs/2503.23367)]\n\n[arxiv 2025.04] Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2504.02612)]\n\n[arxiv 2025.04]  SimpleAR: Pushing the Frontier of Autoregressive Visual Generation through Pretraining, SFT, and RL [[PDF](https://arxiv.org/abs/2504.11455),[Page](https://github.com/wdrink/SimpleAR)] ![Code](https://img.shields.io/github/stars/wdrink/SimpleAR?style=social&label=Star)\n\n[arxiv 2025.04] Head-Aware KV Cache Compression for Efficient Visual Autoregressive Modeling  [[PDF](https://arxiv.org/abs/2504.09261)]\n\n[arxiv 2025.04] Token-Shuffle: Towards High-Resolution Image Generation with Autoregressive Models  [[PDF](https://arxiv.org/abs/2504.17789)]\n\n[arxiv 2025.05]  TensorAR: Refinement is All You Need in Autoregressive Image Generation [[PDF](https://arxiv.org/pdf/2505.16324)]\n\n[arxiv 2025.06] HMAR: Efficient Hierarchical Masked Auto-Regressive Image Generation  [[PDF](https://arxiv.org/abs/2506.04421),[Page](https://research.nvidia.com/labs/dir/hmar/)] \n\n[arxiv 2025.06] SpectralAR: Spectral Autoregressive 
Visual Generation  [[PDF](https://arxiv.org/abs/2506.10962),[Page](https://huang-yh.github.io/spectralar/)] ![Code](https://img.shields.io/github/stars/huang-yh/SpectralAR?style=social&label=Star)

[arxiv 2025.08]  NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale [[PDF](https://arxiv.org/abs/2508.10711),[Page](https://github.com/stepfun-ai/NextStep-1)] ![Code](https://img.shields.io/github/stars/stepfun-ai/NextStep-1?style=social&label=Star)

[arxiv 2025.10] Go with Your Gut: Scaling Confidence for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2509.26376),[Page](https://github.com/EnVision-Research/ScalingAR)] ![Code](https://img.shields.io/github/stars/EnVision-Research/ScalingAR?style=social&label=Star)

[arxiv 2025.10] FARMER: Flow AutoRegressive Transformer over Pixels  [[PDF](https://arxiv.org/abs/2510.23588)]

[arxiv 2025.11] Diversity Has Always Been There in Your Visual Autoregressive Models  [[PDF](https://arxiv.org/abs/2511.17074),[Page](https://github.com/wangtong627/DiverseVAR)] ![Code](https://img.shields.io/github/stars/wangtong627/DiverseVAR?style=social&label=Star)

[arxiv 2025.12] Progress by Pieces: Test-Time Scaling for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2511.21185),[Page](https://grid-ar.github.io/)] 

[arxiv 2026.02] Autoregressive Image Generation with Masked Bit Modeling  [[PDF](https://arxiv.org/abs/2602.09024),[Page](https://bar-gen.github.io/)] ![Code](https://img.shields.io/github/stars/amazon-far/BAR?style=social&label=Star)

[arxiv 2026.02]  BitDance: Scaling Autoregressive Generative Models with Binary Tokens [[PDF](https://arxiv.org/abs/2602.14041),[Page](https://bitdance.csuhan.com/)] ![Code](https://img.shields.io/github/stars/shallowdream204/BitDance?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

## autoregressive improvement
[arxiv 2025.10] REAR: Rethinking Visual Autoregressive Models via Generator-Tokenizer Consistency Regularization  [[PDF](https://arxiv.org/abs/2510.04450)]

[arxiv 2025.12]  DiverseVAR: Balancing Diversity and Quality of Next-Scale Visual Autoregressive Models [[PDF](https://arxiv.org/pdf/2511.21415)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

## autoregressive Editing 
[arxiv 2025.04] Training-Free Text-Guided Image Editing with Visual Autoregressive Model  [[PDF](https://arxiv.org/abs/2503.23897),[Page](https://github.com/wyf0912/AREdit)] ![Code](https://img.shields.io/github/stars/wyf0912/AREdit?style=social&label=Star)

[arxiv 2025.04] Anchor Token Matching: Implicit Structure Locking for Training-free AR Image Editing  [[PDF](https://arxiv.org/abs/2504.10434),[Page](https://github.com/hutaiHang/ATM)] ![Code](https://img.shields.io/github/stars/hutaiHang/ATM?style=social&label=Star)

[arxiv 2025.05] Context-Aware Autoregressive Models for Multi-Conditional Image Generation  [[PDF](https://arxiv.org/abs/2505.12274)]

[arxiv 2025.08]  Visual Autoregressive Modeling for Instruction-Guided Image Editing [[PDF](https://arxiv.org/abs/2508.15772),[Page](https://github.com/HiDream-ai/VAREdit)] ![Code](https://img.shields.io/github/stars/HiDream-ai/VAREdit?style=social&label=Star)

[arxiv 2025.09]  Discrete Noise Inversion for Next-scale Autoregressive Text-based Image Editing [[PDF](https://arxiv.org/abs/2509.01984)]

[arxiv 2026.03]   [[PDF](),[Page]()] 
![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## autoregressive concept\n[arxiv 2025.04] Personalized Text-to-Image Generation with Auto-Regressive Models  [[PDF](https://arxiv.org/abs/2504.13162),[Page](https://github.com/KaiyueSun98/T2I-Personalization-with-AR)] ![Code](https://img.shields.io/github/stars/KaiyueSun98/T2I-Personalization-with-AR?style=social&label=Star)\n\n[arxiv 2025.06]  CoAR: Concept Injection into Autoregressive Models for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2508.07341),[Page](https://github.com/KZFkzf/CoAR)] ![Code](https://img.shields.io/github/stars/KZFkzf/CoAR?style=social&label=Star)\n\n[arxiv 2025.10]  EchoGen: Generating Visual Echoes in Any Scene via Feed-Forward Subject-Driven Auto-Regressive Model [[PDF](https://arxiv.org/abs/2509.26127)]\n\n[arxiv 2025.10] TokenAR: Multiple Subject Generation via Autoregressive Token-level enhancement  [[PDF](https://arxiv.org/abs/2510.16332),[Page](https://github.com/lyrig/TokenAR)] ![Code](https://img.shields.io/github/stars/lyrig/TokenAR?style=social&label=Star)\n\n[arxiv 2026.01]  DreamVAR: Taming Reinforced Visual Autoregressive Model for High-Fidelity Subject-Driven Image Generation [[PDF](https://arxiv.org/abs/2601.22507)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## autoregressive speed\n[arxiv 2025.04]  Fast Autoregressive Models for Continuous Latent Generation [[PDF](https://arxiv.org/pdf/2504.18391)]\n\n[arxiv 2025.06] SkipVAR: Accelerating Visual Autoregressive Modeling via Adaptive Frequency-Aware Skipping  [[PDF](https://arxiv.org/abs/2506.08908),[Page](https://github.com/fakerone-li/SkipVAR)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.07] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2507.01957),[Page](https://github.com/mit-han-lab/lpd)] ![Code](https://img.shields.io/github/stars/mit-han-lab/lpd?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## autoregressive continuous\n[arxiv 2025.05]  Continuous Visual Autoregressive Generation via Score Maximization [[PDF](https://arxiv.org/pdf/2505.07812)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## autoregressive apps\n[arxiv 2025.07]  A Training-Free Style-Personalization via Scale-wise Autoregressive Model [[PDF](https://arxiv.org/pdf/2507.04482),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.07]  CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models [[PDF](https://arxiv.org/pdf/2507.13984)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## autoregressive cot\n[arxiv 2025.10] Improving Chain-of-Thought Efficiency for Autoregressive Image Generation  [[PDF](https://arxiv.org/abs/2510.05593)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## autoregressive feedback\n[arxiv 2025.08] AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning [[PDF](https://arxiv.org/pdf/2508.06924),[Page](https://github.com/Kwai-Klear/AR-GRPO)] 
\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Distill Diffusion Model \n[arxiv 2024.05]Distilling Diffusion Models into Conditional GANs [[PDF](https://arxiv.org/abs/2405.05967),[Page](https://mingukkang.github.io/Diffusion2GAN/)]\n\n[arxiv 2024.06] Plug-and-Play Diffusion Distillation [[PDF](https://arxiv.org/abs/2406.01954)]\n\n[arxiv 2024.10] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models  [[PDF](https://arxiv.org/abs/2410.07133)]\n\n[arxiv 2024.10]  DDIL: Improved Diffusion Distillation With Imitation Learning[[PDF](https://arxiv.org/abs/2410.11971)]\n\n\n[arxiv 2025.03] Scale-wise Distillation of Diffusion Models  [[PDF](https://arxiv.org/abs/2503.16397),[Page](https://yandex-research.github.io/swd/)] ![Code](https://img.shields.io/github/stars/yandex-research/swd?style=social&label=Star)\n\n[arxiv 2025.04]  Autoregressive Distillation of Diffusion Transformers [[PDF](https://arxiv.org/abs/2504.11295),[Page](https://github.com/alsdudrla10/ARD)] ![Code](https://img.shields.io/github/stars/alsdudrla10/ARD?style=social&label=Star)\n\n[arxiv 2025.08] Echo-4o: Harnessing the Power of GPT-4o Synthetic Images for Improved Image Generation  [[PDF](https://arxiv.org/abs/2508.09987),[Page](https://github.com/yejy53/Echo-4o)] ![Code](https://img.shields.io/github/stars/yejy53/Echo-4o?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Try-on \n[arxiv 2024.03]Time-Efficient and Identity-Consistent Virtual Try-On Using A Variant of Altered Diffusion Models [[PDF](https://arxiv.org/abs/2403.07371)]\n\n[arxiv 2024.03]Wear-Any-Way: Manipulable Virtual Try-on via Sparse Correspondence Alignment [[PDF](https://arxiv.org/abs/2403.12965),[Page](https://mengtingchen.github.io/wear-any-way-page/)]\n\n[arxiv 2024.04]Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On [[PDF](https://arxiv.org/abs/2404.01089)]\n\n[arxiv 2024.04]TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On [[PDF](https://arxiv.org/abs/2404.00878),[Page](https://github.com/jiazheng-xing/TryOn-Adapter)]\n\n[arxiv 2024.04]FLDM-VTON: Faithful Latent Diffusion Model for Virtual Try-on [[PDF](https://arxiv.org/abs/2404.14162)]\n\n[arxiv 2024.03]Improving Diffusion Models for Authentic Virtual Try-on in the Wild [[PDF](https://arxiv.org/abs/2403.05139),[Page](https://idm-vton.github.io/)]\n\n[arxiv 2024.04]MV-VTON: Multi-View Virtual Try-On with Diffusion Models [[PDF](https://arxiv.org/abs/2404.17364)]\n\n[arxiv 2024.05]AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [[PDF](https://arxiv.org/abs/2405.18172),[Page](https://colorful-liyu.github.io/anyfit-page/)]\n\n[arxiv 2024.06]  GraVITON: Graph based garment warping with attention guided inversion for Virtual-tryon [[PDF](https://arxiv.org/abs/2406.02184)]\n\n[arxiv 2024.06]M&M VTO: Multi-Garment Virtual Try-On and Editing[[PDF](https://arxiv.org/abs/2406.04542),[Page](https://mmvto.github.io/)]\n\n[arxiv 2024.06]Self-Supervised Vision Transformer for Enhanced Virtual Clothes Try-On [[PDF](https://arxiv.org/abs/2406.10539)]\n\n[arxiv 2024.06] MaX4Zero: Masked Extended Attention for Zero-Shot Virtual Try-On In The Wild  
[[PDF](https://nadavorzech.github.io/max4zero.github.io/),[Page](https://nadavorzech.github.io/max4zero.github.io/)]\n\n[arxiv 2024.07]  D4-VTON: Dynamic Semantics Disentangling for Differential Diffusion based Virtual Try-On [[PDF](https://arxiv.org/abs/2407.15111),[Page](https://github.com/Jerome-Young/D4-VTON)]\n\n[arxiv 2024.07] DreamVTON: Customizing 3D Virtual Try-on with Personalized Diffusion Models  [[PDF](https://arxiv.org/abs/2407.16511)]\n\n[arxiv 2024.07]OutfitAnyone: Ultra-high Quality Virtual Try-On for Any Clothing and Any Person[[PDF](https://arxiv.org/abs/2407.16224),[Page](https://humanaigc.github.io/outfit-anyone/)]\n\n[arxiv 2024.07] CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models  [[PDF](https://arxiv.org/abs/2407.15886),[Page](https://github.com/Zheng-Chong/CatVTON)]\n\n[arxiv 2024.08] BooW-VTON: Boosting In-the-Wild Virtual Try-On via Mask-Free Pseudo Data Training  [[PDF](https://arxiv.org/abs/2408.06047),[Page](https://github.com/little-misfit/BooW-VTON)]\n\n[arxiv 2024.09] Improving Virtual Try-On with Garment-focused Diffusion Models  [[PDF](https://arxiv.org/abs/2409.08258),[Page](https://github.com/siqi0905/GarDiff/tree/master)]\n\n[arxiv 2024.09] AnyLogo: Symbiotic Subject-Driven Diffusion System with Gemini Status  [[PDF](https://arxiv.org/abs/2409.17740)]\n\n[arxiv 2024.10] GS-VTON: Controllable 3D Virtual Try-on with Gaussian Splatting[[PDF](https://arxiv.org/abs/2410.05259),[Page](https://yukangcao.github.io/GS-VTON/)]\n\n[arxiv 2024.11]  Try-On-Adapter: A Simple and Flexible Try-On Paradigm [[PDF](https://arxiv.org/abs/2411.10187)]\n\n[arxiv 2024.11]  FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on [[PDF](https://arxiv.org/abs/2411.10499),[Page](https://byjiang.com/FitDiT/)]\n\n[arxiv 2024.11] TED-VITON: Transformer-Empowered Diffusion Models for Virtual Try-On  [[PDF](https://arxiv.org/abs/2411.17017)]\n\n[arxiv 2024.12]  TryOffDiff: Virtual-Try-Off via High-Fidelity Garment Reconstruction using Diffusion Models [[PDF](https://arxiv.org/abs/2411.18350),[Page](https://rizavelioglu.github.io/tryoffdiff/)] \n\n[arxiv 2024.12]  AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models [[PDF](https://arxiv.org/abs/2412.04146),[Page](https://crayon-shinchan.github.io/AnyDressing/)] ![Code](https://img.shields.io/github/stars/Crayon-Shinchan/AnyDressing?style=social&label=Star)\n\n[arxiv 2024.12] PEMF-VVTO: Point-Enhanced Video Virtual Try-on via Mask-free Paradigm  [[PDF](https://arxiv.org/pdf/2412.03021)]\n\n[arxiv 2024.12]  Leffa: Learning Flow Fields in Attention for Controllable Person Image Generation [[PDF](https://arxiv.org/abs/2412.08486),[Page](https://github.com/franciszzj/Leffa)] ![Code](https://img.shields.io/github/stars/franciszzj/Leffa?style=social&label=Star)\n\n[arxiv 2024.12]  SwiftTry: Fast and Consistent Video Virtual Try-On with Diffusion Models [[PDF](https://arxiv.org/abs/2412.10178)]\n\n[arxiv 2024.12] Dynamic Try-On: Taming Video Virtual Try-on with Dynamic Attention Mechanism  [[PDF](https://arxiv.org/abs/2412.09822),[Page](https://zhengjun-ai.github.io/dynamic-tryon-page/)] \n\n[arxiv 2024.12] Learning Implicit Features with Flow Infused Attention for Realistic Virtual Try-On  [[PDF](https://arxiv.org/abs/2412.11435)]\n\n[arxiv 2024.12] FashionComposer: Compositional Fashion Image Generation  [[PDF](https://arxiv.org/abs/2412.14168),[Page](https://sihuiji.github.io/FashionComposer-Page/)] 
![Code](https://img.shields.io/github/stars/SihuiJi/FashionComposer?style=social&label=Star)\n\n[arxiv 2024.12] DiffusionTrend: A Minimalist Approach to Virtual Fashion Try-On  [[PDF](https://arxiv.org/abs/2412.14465)]\n\n[arxiv 2024.12] PromptDresser: Improving the Quality and Controllability of Virtual Try-On via Generative Textual Prompt and Prompt-aware Mask  [[PDF](https://arxiv.org/abs/2412.16978),[Page](https://github.com/rlawjdghek/PromptDresser)] ![Code](https://img.shields.io/github/stars/rlawjdghek/PromptDresser?style=social&label=Star)\n\n[arxiv 2024.12] Fashionability-Enhancing Outfit Image Editing with Conditional Diffusion Model  [[PDF](https://arxiv.org/pdf/2412.18421)]\n\n[arxiv 2025.01] MC-VTON: Minimal Control Virtual Try-On Diffusion Transformer [[PDF](https://arxiv.org/abs/2501.03630)]\n\n[arxiv 2025.01] Enhancing Virtual Try-On with Synthetic Pairs and Error-Aware Noise Scheduling  [[PDF](https://arxiv.org/abs/2501.04666)]\n\n[arxiv 2025.01]  1-2-1: Renaissance of Single-Network Paradigm for Virtual Try-On [[PDF](https://arxiv.org/abs/2501.05369),[Page](https://ningshuliang.github.io/2023/Arxiv/index.html)] ![Code](https://img.shields.io/github/stars/ningshuliang/1-2-1-MNVTON?style=social&label=Star)\n\n[arxiv 2025.02] MFP-VTON: Enhancing Mask-Free Person-to-Person Virtual Try-On via Diffusion Transformer  [[PDF](https://arxiv.org/pdf/2502.01626)]\n\n[arxiv 2025.02] TRUEPOSE: Human-Parsing-guided Attention Diffusion for Full-ID Preserving Pose Transfer  [[PDF](https://arxiv.org/pdf/2502.03426)]\n\n[arxiv 2025.02] CrossVTON: Mimicking the Logic Reasoning on Cross-category Virtual Try-on guided by Tri-zone Priors  [[PDF](https://arxiv.org/pdf/2502.14373)]\n\n[arxiv 2025.03] MF-VITON: High-Fidelity Mask-Free Virtual Try-On with Minimal Input  [[PDF](https://arxiv.org/pdf/2503.08650),[Page](https://zhenchenwan.github.io/MF-VITON/)] ![Code](https://img.shields.io/github/stars/ZhenchenWan/MF-VITON-High-Fidelity-Mask-Free-Virtual-Try-On-with-Minimal-Input?style=social&label=Star)\n\n[arxiv 2025.03] Shining Yourself: High-Fidelity Ornaments Virtual Try-on with Diffusion Model  [[PDF](https://arxiv.org/pdf/2503.16065),[Page](https://shiningyourself.github.io/)] \n\n[arxiv 2025.03] Multi-focal Conditioned Latent Diffusion for Person Image Synthesis  [[PDF](https://arxiv.org/pdf/2503.15686),[Page](https://github.com/jqliu09/mcld)] ![Code](https://img.shields.io/github/stars/jqliu09/mcld?style=social&label=Star)\n\n[arxiv 2025.04] 3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models  [[PDF](https://arxiv.org/abs/2504.17414),[Page](https://2y7c3.github.io/3DV-TON/)]\n\n[arxiv 2025.05]  Pursuing Temporal-Consistent Video Virtual Try-On via Dynamic Pose Interaction [[PDF](https://arxiv.org/abs/2505.16980)]\n\n[arxiv 2025.06] ChronoTailor: Harnessing Attention Guidance for Fine-Grained Video Virtual Try-On  [[PDF](https://arxiv.org/abs/2506.05858)]\n\n[arxiv 2025.07] OmniVTON: Training-Free Universal Virtual Try-On  [[PDF](https://arxiv.org/abs/2507.15037),[Page](https://github.com/Jerome-Young/OmniVTON)] \n\n[arxiv 2025.07]  FW-VTON: Flattening-and-Warping for Person-to-Person Virtual Try-on [[PDF](https://arxiv.org/pdf/2507.16010)]\n\n[arxiv 2025.08] One Model For All: 
Partial Diffusion for Unified Try-On and Try-Off in Any Pose  [[PDF](https://arxiv.org/abs/2508.04559),[Page](https://onemodelforall.github.io/)]\n\n[arxiv 2025.08] MuGa-VTON: Multi-Garment Virtual Try-On via Diffusion Transformers with Prompt Customization  [[PDF](https://arxiv.org/pdf/2508.08488)]\n\n[arxiv 2025.08] DualFit: A Two-Stage Virtual Try-On via Warping and Synthesis  [[PDF](https://arxiv.org/abs/2508.12131)]\n\n[arxiv 2025.08]  OmniTry: Virtual Try-On Anything without Masks [[PDF](https://arxiv.org/abs/2508.13632),[Page](https://omnitry.github.io/)] ![Code](https://img.shields.io/github/stars/Kunbyte-AI/OmniTry?style=social&label=Star)\n\n[arxiv 2025.08] JCo-MVTON: Jointly Controllable Multi-Modal Diffusion Transformer for Mask-Free Virtual Try-on  [[PDF](https://arxiv.org/abs/2508.17614),[Page](https://github.com/damo-cv/JCo-MVTON)] ![Code](https://img.shields.io/github/stars/damo-cv/JCo-MVTON?style=social&label=Star)\n\n[arxiv 2025.08] FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models  [[PDF](https://arxiv.org/abs/2508.20586),[Page](https://github.com/Zheng-Chong/FastFit)] ![Code](https://img.shields.io/github/stars/Zheng-Chong/FastFit?style=social&label=Star)\n\n[arxiv 2025.08]  Dress&Dance: Dress up and Dance as You Like It [[PDF](https://arxiv.org/abs/2508.21070),[Page](https://immortalco.github.io/DressAndDance/)] \n\n[arxiv 2025.09] Virtual Fitting Room: Generating Arbitrarily Long Videos of Virtual Try-On from a Single Image -- Technical Preview  [[PDF](https://arxiv.org/abs/2509.04450),[Page](https://immortalco.github.io/VirtualFittingRoom/)] \n\n[arxiv 2025.09]  HoloGarment: 360° Novel View Synthesis of In-the-Wild Garments [[PDF](https://arxiv.org/abs/2509.12187),[Page](https://johannakarras.github.io/HoloGarment/)]\n\n[arxiv 2025.09] Efficient Encoder-Free Pose Conditioning and Pose Control for Virtual Try-On  [[PDF](https://arxiv.org/abs/2509.20343),[Page](https://pose-vton.github.io/vto-pose-conditioning/)] \n\n[arxiv 2025.10]  AvatarVTON: 4D Virtual Try-On for Animatable Avatars [[PDF](https://arxiv.org/abs/2510.04822)]\n\n[arxiv 2025.10]  DiT-VTON: Diffusion Transformer Framework for Unified Multi-Category Virtual Try-On and Virtual Try-All with Integrated Image Editing [[PDF](https://arxiv.org/abs/2510.04797)]\n\n[arxiv 2025.12] FitControler: Toward Fit-Aware Virtual Try-On  [[PDF](https://arxiv.org/pdf/2512.24016)]\n\n[arxiv 2026.03] MOBILE-VTON: High-Fidelity On-Device Virtual Try-On  [[PDF](https://arxiv.org/pdf/2603.00947)]\n\n[arxiv 2026.03] Garments2Look: A Multi-Reference Dataset for High-Fidelity Outfit-Level Virtual Try-On with Clothing and Accessories  [[PDF](https://arxiv.org/abs/2603.14153),[Page](https://artmesciencelab.github.io/Garments2Look)]\n\n[arxiv 2026.03] PROMO: Promptable Outfitting for Efficient High-Fidelity Virtual Try-On  [[PDF](https://arxiv.org/abs/2603.11675)]\n\n[arxiv 2026.03] VTEdit-Bench: A Comprehensive Benchmark for Multi-Reference Image Editing Models in Virtual Try-On  [[PDF](https://arxiv.org/abs/2603.11734)]\n\n[arxiv 2026.03] OmniDiT: Extending Diffusion Transformer to Omni-VTON Framework [[PDF](https://arxiv.org/abs/2603.19643)]\n\n[arxiv 2026.03] Dress-ED: Instruction-Guided Editing for Virtual Try-On and Try-Off  [[PDF](https://arxiv.org/abs/2603.22607)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
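\n\nMost mask-based try-on pipelines above share a common skeleton: segment the garment region, mask it out, and re-synthesize it with a diffusion inpainter conditioned on the target garment. The sketch below shows only that structural starting point with an off-the-shelf inpainting pipeline from `diffusers`; it conditions on a text prompt rather than a reference garment image, so it is a hypothetical baseline, not a reimplementation of any listed method, and the checkpoint name and file paths are illustrative.\n\n```python\nimport torch\nfrom diffusers import StableDiffusionInpaintPipeline\nfrom PIL import Image\n\n# Load a general-purpose inpainting model (checkpoint availability may change).\npipe = StableDiffusionInpaintPipeline.from_pretrained(\n    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16\n).to("cuda")\n\n# Hypothetical inputs: a person photo and a mask that is white over the\n# garment region to be replaced (try-on methods usually derive this mask\n# from a human parser).\nperson = Image.open("person.png").convert("RGB").resize((512, 512))\ngarment_mask = Image.open("garment_mask.png").convert("L").resize((512, 512))\n\nresult = pipe(\n    prompt="a person wearing a red knitted sweater, photorealistic",\n    image=person,\n    mask_image=garment_mask,   # white = region to repaint\n    num_inference_steps=30,\n).images[0]\nresult.save("tryon_baseline.png")\n```\n\nMethods such as CatVTON then inject the reference garment itself (per its title, by concatenation into the denoiser's input), while the mask-free entries above (e.g., BooW-VTON, MF-VITON, OmniTry) learn to drop the explicit mask altogether.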
\n\n\n## Model adaptation/Merge \n[arxiv 2023.12]X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model [[PDF](https://arxiv.org/abs/2312.02238),[Page](https://showlab.github.io/X-Adapter/)]\n\n[arxiv 2024.10]  Model merging with SVD to tie the Knots [[PDF](https://arxiv.org/abs/2410.19735),[Page](https://github.com/gstoica27/KnOTS)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Text \n[arxiv 2023.12]UDiffText: A Unified Framework for High-quality Text Synthesis in Arbitrary Images via Character-aware Diffusion Models [[PDF](https://arxiv.org/abs/2312.04884)]\n\n[arxiv 2023.12]Brush Your Text: Synthesize Any Scene Text on Images via Diffusion Model [[PDF](https://arxiv.org/abs/2312.12232)]\n\n[arxiv 2024.04]Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering [[PDF](https://arxiv.org/abs/2403.09622),[Page](https://glyph-byt5.github.io/)]\n\n[arxiv 2024.05] CustomText: Customized Textual Image Generation using Diffusion Models [[PDF](https://arxiv.org/abs/2405.12531)]\n\n[arxiv 2024.06] SceneTextGen: Layout-Agnostic Scene Text Image Synthesis with Diffusion Models [[PDF](https://arxiv.org/abs/2406.01062)]\n\n[arxiv 2024.06] FontStudio: Shape-Adaptive Diffusion Model for Coherent and Consistent Font Effect Generation [[PDF](https://arxiv.org/abs/2406.08392),[Page](https://font-studio.github.io/)]\n\n[arxiv 2024.09] DiffusionPen: Towards Controlling the Style of Handwritten Text Generation  [[PDF](https://arxiv.org/abs/2409.06065),[Page](https://github.com/koninik/DiffusionPen)]\n\n[arxiv 2024.10] TextCtrl: Diffusion-based Scene Text Editing with Prior Guidance Control  [[PDF](https://arxiv.org/abs/2410.10133),[Page]()]\n\n[arxiv 2024.10]  TextMaster: Universal Controllable Text Edit [[PDF](https://arxiv.org/abs/2410.09879),[Page]()]\n\n[arxiv 2024.11] AnyText2: Visual Text Generation and Editing With Customizable Attributes  [[PDF](https://arxiv.org/abs/2411.15245),[Page](https://github.com/tyxsspa/AnyText2)] ![Code](https://img.shields.io/github/stars/tyxsspa/AnyText2?style=social&label=Star)\n\n[arxiv 2024.11] Conditional Text-to-Image Generation with Reference Guidance  [[PDF](https://arxiv.org/abs/2411.16713)] \n\n[arxiv 2024.12] Type-R: Automatically Retouching Typos for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2411.18159)] \n\n[arxiv 2024.12] FonTS: Text Rendering with Typography and Style Controls  [[PDF](https://arxiv.org/pdf/2412.00136)]\n\n[arxiv 2024.12]  FlowEdit: Inversion-Free Text-Based Editing Using Pre-Trained Flow Models [[PDF](https://arxiv.org/abs/2412.08629),[Page](https://matankleiner.github.io/flowedit/)] ![Code](https://img.shields.io/github/stars/fallenshock/FlowEdit?style=social&label=Star)\n\n[arxiv 2025.02]  Precise Parameter Localization for Textual Generation in Diffusion Models [[PDF](https://arxiv.org/abs/2502.09935),[Page](https://t2i-text-loc.github.io/)] \n\n[arxiv 2025.02]  ControlText: Unlocking Controllable Fonts in Multilingual Text Rendering without Font Annotations [[PDF](https://arxiv.org/pdf/2502.10999),[Page](https://github.com/bowen-upenn/ControlText)] ![Code](https://img.shields.io/github/stars/bowen-upenn/ControlText?style=social&label=Star)\n\n[arxiv 2025.03]  Recognition-Synergistic Scene Text Editing [[PDF](https://arxiv.org/pdf/2503.08387),[Page](https://github.com/ZhengyaoFang/RS-STE)] 
![Code](https://img.shields.io/github/stars/ZhengyaoFang/RS-STE?style=social&label=Star)\n\n[arxiv 2025.03] Beyond Words: Advancing Long-Text Image Generation via Multimodal Autoregressive Models  [[PDF](https://arxiv.org/html/2503.20198v1),[Page](https://fingerrec.github.io/longtextar/)] \n\n[arxiv 2025.04]  Point-Driven Interactive Text and Image Layer Editing Using Diffusion Models [[PDF](https://arxiv.org/abs/2504.14108)]\n\n[arxiv 2025.05]  FLUX-Text: A Simple and Advanced Diffusion Transformer Baseline for Scene Text Editing [[PDF](https://arxiv.org/abs/2505.03329)]\n\n[arxiv 2025.05]  TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis [[PDF](https://arxiv.org/abs/2505.17778),[Page](https://yyyyyxie.github.io/textflux-site/)] ![Code](https://img.shields.io/github/stars/yyyyyxie/textflux?style=social&label=Star)\n\n[arxiv 2025.06]  EasyText: Controllable Diffusion Transformer for Multilingual Text Rendering [[PDF](),[Page]()]\n\n[arxiv 2025.06] FontAdapter: Instant Font Adaptation in Visual Text Generation  [[PDF](https://arxiv.org/abs/2506.05843),[Page](https://fontadapter.github.io/)]\n\n[arxiv 2025.06]  Calligrapher: Freestyle Text Image Customization [[PDF](https://arxiv.org/abs/2506.24123),[Page](https://calligrapher2025.github.io/Calligrapher)] ![Code](https://img.shields.io/github/stars/Calligrapher2025/Calligrapher?style=social&label=Star)\n\n[arxiv 2025.07] UniGlyph: Unified Segmentation-Conditioned Diffusion for Precise Visual Text Synthesis  [[PDF](https://arxiv.org/pdf/2507.00992)]\n\n[arxiv 2025.07] WordCraft: Interactive Artistic Typography with Attention Awareness and Noise Blending  [[PDF](https://arxiv.org/pdf/2507.09573)]\n\n[arxiv 2025.10]  SceneTextStylizer: A Training-Free Scene Text Style Transfer Framework with Diffusion Model [[PDF](https://arxiv.org/abs/2510.10910)]\n\n[arxiv 2025.10] OmniText: A Training-Free Generalist for Controllable Text-Image Manipulation  [[PDF](https://arxiv.org/abs/2510.24093)]\n\n[arxiv 2026.03] GlyphPrinter: Region-Grouped Direct Preference Optimization for Glyph-Accurate Visual Text Rendering  [[PDF](https://arxiv.org/abs/2603.15616),[Page](https://henghuiding.com/GlyphPrinter/)] ![Code](https://img.shields.io/github/stars/FudanCVL/GlyphPrinter?style=social&label=Star)\n\n[arxiv 2026.03] WeEdit: A Dataset, Benchmark and Glyph-Guided Framework for Text-centric Image Editing  [[PDF](https://arxiv.org/abs/2603.11593)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Caption \n\n[arxiv 2024.10] CtrlSynth: Controllable Image Text Synthesis for Data-Efficient Multimodal Learning  [[PDF](https://arxiv.org/abs/2410.11963)]\n\n[arxiv 2024.10] Altogether: Image Captioning via Re-aligning Alt-text  [[PDF](https://arxiv.org/abs/2410.17251),[Page]()]\n\n[arxiv 2024.11] Precision or Recall? 
An Analysis of Image Captions for Training Text-to-Image Generation Model  [[PDF](https://arxiv.org/abs/2411.05079),[Page](https://github.com/shengcheng/Captions4T2I)]\n\n[arxiv 2025.02]  Decoder-Only LLMs are Better Controllers for Diffusion Models [[PDF](https://arxiv.org/pdf/2502.04412)]\n\n[arxiv 2025.02]  LDGen: Enhancing Text-to-Image Synthesis via Large Language Model-Driven Language Representation [[PDF](https://arxiv.org/abs/2502.18302),[Page](https://zrealli.github.io/LDGen/)] ![Code](https://img.shields.io/github/stars/zrealli/LDGen?style=social&label=Star)\n\n[arxiv 2025.04] Generating Fine Details of Entity Interactions [[PDF](https://arxiv.org/abs/2504.08714),[Page](https://concepts-ai.com/p/detailscribe/)] ![Code](https://img.shields.io/github/stars/gxy000/DetailScribe?style=social&label=Star)\n\n[arxiv 2025.04] Describe Anything: Detailed Localized Image and Video Captioning  [[PDF](https://arxiv.org/abs/2504.16072),[Page](https://describe-anything.github.io/)] ![Code](https://img.shields.io/github/stars/NVlabs/describe-anything?style=social&label=Star)\n\n[arxiv 2025.10] GenPilot: A Multi-Agent System for Test-Time Prompt Optimization in Image Generation  [[PDF](https://arxiv.org/abs/2510.07217),[Page](https://github.com/27yw/GenPilot)] ![Code](https://img.shields.io/github/stars/27yw/GenPilot?style=social&label=Star)\n\n[arxiv 2025.10] Grasp Any Region: Towards Precise, Contextual Pixel Understanding for Multimodal LLMs  [[PDF](https://arxiv.org/abs/2510.18876),[Page](https://github.com/Haochen-Wang409/Grasp-Any-Region)] ![Code](https://img.shields.io/github/stars/Haochen-Wang409/Grasp-Any-Region?style=social&label=Star)\n\n[arxiv 2026.03] FineViT: Progressively Unlocking Fine-Grained Perception with Dense Recaptions  [[PDF](https://arxiv.org/abs/2603.17326)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## face swapping \n[arxiv 2024.03]Infinite-ID: Identity-preserved Personalization via ID-semantics Decoupling Paradigm [[PDF](https://arxiv.org/abs/2403.11781),[Page](https://infinite-id.github.io/)]\n\n[github] [Reactor](https://github.com/Gourieff/sd-webui-reactor)\n\n[arxiv 2024.11] MegaPortrait: Revisiting Diffusion Control for High-fidelity Portrait Generation  [[PDF](https://arxiv.org/abs/2411.04357)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n \n## Concept / personalization\n*[Arxiv.2208; NVIDIA]  ***An Image is Worth One Word:*** Personalizing Text-to-Image Generation using Textual Inversion [[PDF](https://arxiv.org/abs/2208.01618), [Page](https://github.com/rinongal/textual_inversion)] ![Code](https://img.shields.io/github/stars/rinongal/textual_inversion?style=social&label=Star)\n\n[NIPS 22; google] ***DreamBooth***: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation [[PDF](https://arxiv.org/abs/2208.12242), [Page](https://dreambooth.github.io/), [Code](https://github.com/XavierXiao/Dreambooth-Stable-Diffusion)] ![Code](https://img.shields.io/github/stars/XavierXiao/Dreambooth-Stable-Diffusion?style=social&label=Star)\n\n[arxiv 2022.12; UT] Multiresolution Textual Inversion [[PDF](https://arxiv.org/abs/2210.16056)]  \n\n*[arxiv 2022.12]Multi-Concept Customization of Text-to-Image Diffusion \[[PDF](https://arxiv.org/abs/2212.04488), [Page](https://www.cs.cmu.edu/~custom-diffusion/), [code](https://github.com/adobe-research/custom-diffusion)\] 
![Code](https://img.shields.io/github/stars/adobe-research/custom-diffusion?style=social&label=Star)\n\n[arxiv 2023.02]ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2302.13848)]\n\n[arxiv 2023.02, tel]Designing an Encoder for Fast Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2302.12228), [Page](https://tuning-encoder.github.io/)]\n\n[arxiv 2023.03]Cones: Concept Neurons in Diffusion Models for Customized Generation [[PDF](https://arxiv.org/abs/2303.05125)]\n\n[arxiv 2023.03]P+: Extended Textual Conditioning in Text-to-Image Generation [[PDF](https://prompt-plus.github.io/files/PromptPlus.pdf)]\n\n[arxiv 2023.03]Highly Personalized Text Embedding for Image Manipulation by Stable Diffusion [[PDF](https://arxiv.org/abs/2303.08767)]\n\n->[arxiv 2023.04]Continual Diffusion: Continual Customization of Text-to-Image Diffusion with C-LoRA[[PDF](https://arxiv.org/abs/2304.06027), [Page](https://jamessealesmith.github.io/continual-diffusion/)]\n\n[arxiv 2023.04]Controllable Textual Inversion for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2304.05265)]\n\n*[arxiv 2023.04]InstantBooth: Personalized Text-to-Image Generation without Test-Time Finetuning [[PDF](https://arxiv.org/abs/2304.03411)]\n\n[arxiv 2023.05]Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models [[PDF](https://arxiv.org/abs/2305.18292),[Page](https://showlab.github.io/Mix-of-Show/)] ![Code](https://img.shields.io/github/stars/TencentARC/Mix-of-Show?style=social&label=Star)\n\n[arxiv 2023.05]Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models [[PDF](https://arxiv.org/abs/2305.15779)]\n\n[arxiv 2023.05]DisenBooth: Disentangled Parameter-Efficient Tuning for Subject-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2305.03374)]\n\n[arxiv 2023.05]PHOTOSWAP:Personalized Subject Swapping in Images [[PDF](https://arxiv.org/abs/2305.18286)]\n\n[Siggraph 2023.05]Key-Locked Rank One Editing for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2305.01644), [Page](https://research.nvidia.com/labs/par/Perfusion/)]\n\n[arxiv 2023.05]A Neural Space-Time Representation for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2305.15391),[Page](https://neuraltextualinversion.github.io/NeTI/)] ![Code](https://img.shields.io/github/stars/NeuralTextualInversion/NeTI?style=social&label=Star)\n\n->[arxiv 2023.05]BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing [[PDF](https://arxiv.org/abs/2305.14720), [Page](https://github.com/salesforce/LAVIS/tree/main/projects/blip-diffusion)]\n\n[arxiv 2023.05]Concept Decomposition for Visual Exploration and Inspiration[[PDF](https://arxiv.org/abs/2305.18203),[Page](https://inspirationtree.github.io/inspirationtree/)] ![Code](https://img.shields.io/github/stars/google/inspiration_tree?style=social&label=Star)\n\n[arxiv 2023.05]FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention[[PDF](https://arxiv.org/abs/2305.10431),[Page](https://github.com/mit-han-lab/fastcomposer)] ![Code](https://img.shields.io/github/stars/mit-han-lab/fastcomposer?style=social&label=Star)\n\n[arxiv 2023.06]Cones 2: Customizable Image Synthesis with Multiple Subjects [[PDF](https://arxiv.org/abs/2305.19327)]\n\n[arxiv 2023.06]Inserting Anybody in Diffusion Models via Celeb Basis [[PDF](https://arxiv.org/abs/2306.00926), 
[Page](https://celeb-basis.github.io/)] ![Code](https://img.shields.io/github/stars/ygtxr1997/CelebBasis?style=social&label=Star)\n\n->[arxiv 2023.06]A-STAR: Test-time Attention Segregation and Retention for Text-to-image Synthesis [[PDF](https://arxiv.org/pdf/2306.14544.pdf)]\n\n[arxiv 2023.06]Generate Anything Anywhere in Any Scene [[PDF](https://arxiv.org/abs/2306.17154),[Page](https://yuheng-li.github.io/PACGen/)] ![Code](https://img.shields.io/github/stars/Yuheng-Li/PACGen?style=social&label=Star)\n\n[arxiv 2023.07]HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2307.06949),[Page](https://hyperdreambooth.github.io/)]\n\n[arxiv 2023.07]Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models [[PDF](https://arxiv.org/abs/2307.06925), [Page](https://datencoder.github.io/)]\n\n[arxiv 2023.07]ReVersion: Diffusion-Based Relation Inversion from Images [[PDF](https://arxiv.org/abs/2303.13495),[Page](https://ziqihuangg.github.io/projects/reversion.html)] ![Code](https://img.shields.io/github/stars/ziqihuangg/ReVersion?style=social&label=Star)\n\n[arxiv 2023.07]AnyDoor: Zero-shot Object-level Image Customization [[PDF](https://arxiv.org/abs/2307.09481),[Page](https://github.com/ali-vilab/AnyDoor)] ![Code](https://img.shields.io/github/stars/ali-vilab/AnyDoor?style=social&label=Star)\n\n[arxiv 2023.07]Subject-Diffusion:Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning [[PDF](https://arxiv.org/abs/2307.11410), [Page](https://oppo-mente-lab.github.io/subject_diffusion/)] ![Code](https://img.shields.io/github/stars/OPPO-Mente-Lab/Subject-Diffusion?style=social&label=Star)\n\n[arxiv 2023.08]ConceptLab: Creative Generation using Diffusion Prior Constraints [[PDF](https://arxiv.org/abs/2308.02669),[Page](https://kfirgoldberg.github.io/ConceptLab/)] ![Code](https://img.shields.io/github/stars/kfirgoldberg/ConceptLab?style=social&label=Star)\n\n[arxiv 2023.08]Unified Concept Editing in Diffusion Models [[PDF](https://arxiv.org/pdf/2308.14761.pdf), [Page](https://unified.baulab.info/)] ![Code](https://img.shields.io/github/stars/rohitgandikota/unified-concept-editing?style=social&label=Star)\n\n[arxiv 2023.09]Create Your World: Lifelong Text-to-Image Diffusion[[PDF](https://arxiv.org/abs/2309.04430)]\n\n[arxiv 2023.09]MagiCapture: High-Resolution Multi-Concept Portrait Customization [[PDF](https://arxiv.org/abs/2309.06895)]\n\n[arxiv 2023.10]Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else [[PDF](https://arxiv.org/abs/2310.07419)]\n\n[arxiv 2023.11]A Data Perspective on Enhanced Identity Preservation for Diffusion Personalization [[PDF](https://arxiv.org/abs/2311.04315)]\n\n[arxiv 2023.11]The Chosen One: Consistent Characters in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2311.10093), [Page](https://omriavrahami.com/the-chosen-one/)] ![Code](https://img.shields.io/github/stars/ZichengDuan/TheChosenOne?style=social&label=Star)\n\n[arxiv 2023.11]High-fidelity Person-centric Subject-to-Image Synthesis[[PDF](https://arxiv.org/abs/2311.10329)]\n\n[arxiv 2023.11]An Image is Worth Multiple Words: Multi-attribute Inversion for Constrained Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2311.11919)]\n\n[arxiv 2023.11]CatVersion: Concatenating Embeddings for Diffusion-Based Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2311.14631),[Page](https://royzhao926.github.io/CatVersion-page/)] 
![Code](https://img.shields.io/github/stars/RoyZhao926/CatVersion?style=social&label=Star)\n\n[arxiv 2023.12]PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding [[PDF](https://arxiv.org/abs/2312.04461),[Page](https://photo-maker.github.io/)] ![Code](https://img.shields.io/github/stars/TencentARC/PhotoMaker?style=social&label=Star)\n\n[arxiv 2023.12]Context Diffusion: In-Context Aware Image Generation [[PDF](https://arxiv.org/abs/2312.03584)]\n\n[arxiv 2023.12]Customization Assistant for Text-to-image Generation [[PDF](https://arxiv.org/abs/2312.03045)]\n\n[arxiv 2023.12]InstructBooth: Instruction-following Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.03011)]\n\n[arxiv 2023.12]FaceStudio: Put Your Face Everywhere in Seconds [[PDF](https://arxiv.org/abs/2312.02663),[Page](https://icoz69.github.io/facestudio/)] ![Code](https://img.shields.io/github/stars/TencentQQGYLab/FaceStudio?style=social&label=Star)\n\n[arxiv 2023.12]Orthogonal Adaptation for Modular Customization of Diffusion Models [[PDF](https://arxiv.org/abs/2312.02432),[Page](https://ryanpo.com/ortha/)]\n\n[arxiv 2023.12]Separate-and-Enhance: Compositional Finetuning for Text2Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.06712), [Page](https://zpbao.github.io/projects/SepEn/)] ![Code](https://img.shields.io/github/stars/adobe/SeperateAndEnhance?style=social&label=Star)\n\n[arxiv 2023.12]Compositional Inversion for Stable Diffusion Models [[PDF](https://arxiv.org/abs/2312.08048),[Page](https://github.com/zhangxulu1996/Compositional-Inversion)] ![Code](https://img.shields.io/github/stars/zhangxulu1996/Compositional-Inversion?style=social&label=Star)\n\n[arxiv 2023.12]SimAC: A Simple Anti-Customization Method against Text-to-Image Synthesis of Diffusion Models [[PDF](https://arxiv.org/abs/2312.07865)]\n\n[arxiv 2023.12]InstantID: Zero-shot Identity-Preserving Generation in Seconds [[PDF](),[Page](https://instantid.github.io/)] ![Code](https://img.shields.io/github/stars/instantX-research/InstantID?style=social&label=Star)\n\n[arxiv 2023.12]All but One: Surgical Concept Erasing with Model Preservation in Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2312.12807)]\n\n[arxiv 2023.12]Cross Initialization for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.15905)]\n\n[arxiv 2023.12]PALP: Prompt Aligned Personalization of Text-to-Image Models[[PDF](https://arxiv.org/abs/2401.06105), [Page](https://prompt-aligned.github.io/)]\n\n[arxiv 2024.02]Pick-and-Draw: Training-free Semantic Guidance for Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2401.16762)]\n\n[arxiv 2024.02]Separable Multi-Concept Erasure from Diffusion Models[[PDF](https://arxiv.org/abs/2402.05947)]\n\n[arxiv 2024.02]λ-ECLIPSE: Multi-Concept Personalized Text-to-Image Diffusion Models by Leveraging CLIP Latent Space[[PDF](https://arxiv.org/abs/2402.05195),[Page](https://eclipse-t2i.github.io/Lambda-ECLIPSE/)] ![Code](https://img.shields.io/github/stars/eclipse-t2i/lambda-eclipse-inference?style=social&label=Star)\n\n[arxiv 2024.02]Training-Free Consistent Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.03286),[Page](https://consistory-paper.github.io/)] ![Code](https://img.shields.io/github/stars/NVlabs/consistory?style=social&label=Star)\n\n[arxiv 2024.02]Textual Localization: Decomposing Multi-concept Images for Subject-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.09966),[Page](https://github.com/junjie-shentu/Textual-Localization)] 
![Code](https://img.shields.io/github/stars/junjie-shentu/Textual-Localization?style=social&label=Star)\n\n[arxiv 2024.02]DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2402.09812), [Page](https://ku-cvlab.github.io/DreamMatcher/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/DreamMatcher?style=social&label=Star)\n\n[arxiv 2024.02]Direct Consistency Optimization for Compositional Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2402.12004),[Page](https://dco-t2i.github.io/)] ![Code](https://img.shields.io/github/stars/kyungmnlee/dco?style=social&label=Star)\n\n[arxiv 2024.02]ComFusion: Personalized Subject Generation in Multiple Specific Scenes From Single Image [[PDF](https://arxiv.org/abs/2402.11849)]\n\n[arxiv 2024.02]Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition[[PDF](https://arxiv.org/abs/2402.15504), [Page](https://danielchyeh.github.io/Gen4Gen/)] ![Code](https://img.shields.io/github/stars/louisYen/Gen4Gen?style=social&label=Star)\n\n[arxiv 2024.02]DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Model [[PDF](https://arxiv.org/abs/2402.17412),[Page](https://diffusekrona.github.io/)] ![Code](https://img.shields.io/github/stars/IBM/DiffuseKronA?style=social&label=Star)\n\n[arxiv 2024.03]RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization [[PDF](https://arxiv.org/abs/2403.00483),[Page](https://corleone-huang.github.io/realcustom/)] ![Code](https://img.shields.io/github/stars/Corleone-Huang/RealCustomProject?style=social&label=Star)\n\n[arxiv 2024.03]Face2Diffusion for Fast and Editable Face Personalization [[PDF](https://arxiv.org/abs/2403.05094),[Page](https://mapooon.github.io/Face2DiffusionPage/)] ![Code](https://img.shields.io/github/stars/mapooon/Face2Diffusion?style=social&label=Star)\n\n[arxiv 2024.03]FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation [[PDF](https://arxiv.org/abs/2403.06775),[Page](https://github.com/modelscope/facechain)] ![Code](https://img.shields.io/github/stars/modelscope/facechain?style=social&label=Star)\n\n[arxiv 2024.03]Block-wise LoRA: Revisiting Fine-grained LoRA for Effective Personalization and Stylization in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.07500)]\n\n[arxiv 2024.03]LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models [[PDF](https://arxiv.org/abs/2403.11627),[Page](https://github.com/Young98CN/LoRA_Composer)] ![Code](https://img.shields.io/github/stars/Young98CN/LoRA_Composer?style=social&label=Star)\n\n[arxiv 2024.03]OSTAF: A One-Shot Tuning Method for Improved Attribute-Focused T2I Personalization [[PDF](https://arxiv.org/abs/2403.11053)]\n\n[arxiv 2024.03]OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models [[PDF](https://arxiv.org/abs/2403.10983), [Page](https://kongzhecn.github.io/omg-project/)] ![Code](https://img.shields.io/github/stars/kongzhecn/OMG?style=social&label=Star)\n\n[arxiv 2024.03]IDAdapter: Learning Mixed Features for Tuning-Free Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2403.13535)]\n\n[arxiv 2024.03]Tuning-Free Image Customization with Image and Text Guidance [[PDF](https://arxiv.org/abs/2403.12658)]\n\n[arxiv 2024.03]Harmonizing Visual and Textual Embeddings for Zero-Shot Text-to-Image Customization [[PDF](https://arxiv.org/abs/2403.14155),[Page](https://ldynx.github.io/harmony-zero-t2i/)] 
![Code](https://img.shields.io/github/stars/ldynx/harmony-zero-t2i?style=social&label=Star)\n\n[arxiv 2024.03]FlashFace: Human Image Personalization with High-fidelity Identity Preservation [[PDF](https://arxiv.org/abs/2403.17008),[Page](https://jshilong.github.io/flashface-page)] ![Code](https://img.shields.io/github/stars/ali-vilab/FlashFace?style=social&label=Star)\n\n[arxiv 2024.03]Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.16990),[Page](https://omer11a.github.io/bounded-attention/)] ![Code](https://img.shields.io/github/stars/omer11a/bounded-attention?style=social&label=Star)\n\n[arxiv 2024.03]Isolated Diffusion: Optimizing Multi-Concept Text-to-Image Generation Training-Freely with Isolated Diffusion Guidance [[PDF](https://arxiv.org/abs/2403.16954)]\n\n[arxiv 2024.03]Improving Text-to-Image Consistency via Automatic Prompt Optimization [[PDF](https://arxiv.org/abs/2403.17804)]\n\n[arxiv 2024.03]Attention Calibration for Disentangled Text-to-Image Personalization [[PDF](https://arxiv.org/pdf/2403.18551.pdf),[Page](https://github.com/Monalissaa/DisenDiff)] ![Code](https://img.shields.io/github/stars/Monalissaa/DisenDiff?style=social&label=Star)\n\n[arxiv 2024.04]CLoRA: A Contrastive Approach to Compose Multiple LoRA Models [[PDF](https://arxiv.org/abs/2403.19776)]\n\n[arxiv 2024.04]MuDI: Identity Decoupling for Multi-Subject Personalization of Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.04243),[Page](https://mudi-t2i.github.io/)] ![Code](https://img.shields.io/github/stars/agwmon/MuDI?style=social&label=Star)\n\n[arxiv 2024.04]Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models [[PDF](https://arxiv.org/abs/2404.03913)]\n\n[arxiv 2024.04]LCM-Lookahead for Encoder-based Text-to-Image Personalization [[PDF](https://arxiv.org/abs/2404.03620)]\n\n[arxiv 2024.04]MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation [[PDF](https://arxiv.org/abs/2404.05674),[Page](https://moma-adapter.github.io/)] ![Code](https://img.shields.io/github/stars/bytedance/MoMA/tree/main?style=social&label=Star)\n\n[arxiv 2024.04]MC2: Multi-concept Guidance for Customized Multi-concept Generation [[PDF](https://arxiv.org/abs/2404.05268)]\n\n[arxiv 2024.04]Strictly-ID-Preserved and Controllable Accessory Advertising Image Generation [[PDF](https://arxiv.org/abs/2404.04828)]\n\n[arxiv 2024.04]OneActor: Consistent Character Generation via Cluster-Conditioned Guidance [[PDF](https://arxiv.org/abs/2404.10267)]\n\n[arxiv 2024.04] MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation [[PDF](https://arxiv.org/abs/2404.11565),[Page](https://snap-research.github.io/mixture-of-attention)]\n\n[arxiv 2024.04]MultiBooth: Towards Generating All Your Concepts in an Image from Text[[PDF](https://arxiv.org/abs/2404.14239),[Page](https://multibooth.github.io/)] ![Code](https://img.shields.io/github/stars/chenyangzhu1/MultiBooth?style=social&label=Star)\n\n[arxiv 2024.04]Infusion: Preventing Customized Text-to-Image Diffusion from Overfitting [[PDF](https://arxiv.org/abs/2404.14007)]\n\n[arxiv 2024.04]UVMap-ID: A Controllable and Personalized UV Map Generative Model [[PDF](https://arxiv.org/abs/2404.14568)]\n\n[arxiv 2024.04]ConsistentID: Portrait Generation with Multimodal Fine-Grained Identity Preserving [[PDF](https://arxiv.org/abs/2404.16771),[Page](https://ssugarwh.github.io/consistentid.github.io/)] 
![Code](https://img.shields.io/github/stars/JackAILab/ConsistentID?style=social&label=Star)\n\n[arxiv 2024.04]PuLID: Pure and Lightning ID Customization via Contrastive Alignment [[PDF](https://arxiv.org/abs/2404.16022), [Page](https://github.com/ToTheBeginning/PuLID)] ![Code](https://img.shields.io/github/stars/ToTheBeginning/PuLID?style=social&label=Star)\n\n[arxiv 2024.04] Customizing Text-to-Image Diffusion with Object Viewpoint Control  [[PDF](http://arxiv.org/abs/2404.12333),[Page](https://customdiffusion360.github.io/)] ![Code](https://img.shields.io/github/stars/customdiffusion360/custom-diffusion360?style=social&label=Star)\n\n[arxiv 2024.04]CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models [[PDF](https://arxiv.org/abs/2404.15677), [Page](https://github.com/qinghew/CharacterFactory)] ![Code](https://img.shields.io/github/stars/qinghew/CharacterFactory?style=social&label=Star)\n\n[arxiv 2024.04]TheaterGen: Character Management with LLM for Consistent Multi-turn Image Generation [[PDF](https://arxiv.org/abs/2404.18919),[Page](https://howe140.github.io/theatergen.io/)] ![Code](https://img.shields.io/github/stars/donahowe/Theatergen?style=social&label=Star)\n\n[arxiv 2024.05]Customizing Text-to-Image Models with a Single Image Pair[[PDF](https://arxiv.org/abs/2405.01536),[Page](https://paircustomization.github.io/)] ![Code](https://img.shields.io/github/stars/PairCustomization/PairCustomization?style=social&label=Star)\n\n[arxiv 2024.05]InstantFamily: Masked Attention for Zero-shot Multi-ID Image Generation [[PDF](https://arxiv.org/abs/2404.19427)]\n\n[arxiv 2024.05]MasterWeaver: Taming Editability and Identity for Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2405.05806),[Page](https://github.com/csyxwei/MasterWeaver)] ![Code](https://img.shields.io/github/stars/csyxwei/MasterWeaver?style=social&label=Star)\n\n[arxiv 2024.05]Training-free Subject-Enhanced Attention Guidance for Compositional Text-to-image Generation [[PDF](https://arxiv.org/abs/2405.06948)]\n\n[arxiv 2024.05]Non-confusing Generation of Customized Concepts in Diffusion Models [[PDF](https://arxiv.org/abs/2405.06914),[Page](https://clif-official.github.io/clif/)] ![Code](https://img.shields.io/github/stars/clif-official/clif_code?style=social&label=Star)\n\n[arxiv 2024.05]Personalized Residuals for Concept-Driven Text-to-Image Generation [[PDF](https://arxiv.org/abs/2405.12978),[Page](https://cusuh.github.io/personalized-residuals/)]\n\n[arxiv 2024.05] FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition [[PDF](https://arxiv.org/abs/2405.13870),[Page](https://github.com/aim-uofa/FreeCustom)] ![Code](https://img.shields.io/github/stars/aim-uofa/FreeCustom?style=social&label=Star)\n\n[arxiv 2024.05]AttenCraft: Attention-guided Disentanglement of Multiple Concepts for Text-to-Image Customization [[PDF](https://arxiv.org/abs/2405.17965),[Page](https://github.com/junjie-shentu/AttenCraft)] ![Code](https://img.shields.io/github/stars/junjie-shentu/AttenCraft?style=social&label=Star) \n\n[arxiv 2024.05]RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance [[PDF](https://arxiv.org/abs/2405.14677),[Page](https://github.com/feifeiobama/RectifID)] ![Code](https://img.shields.io/github/stars/feifeiobama/RectifID?style=social&label=Star)\n\n[arxiv 
2024.06]AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation [[PDF](https://arxiv.org/abs/2406.01388),[Page](https://howe183.github.io/AutoStudio.io/)] ![Code](https://img.shields.io/github/stars/donahowe/AutoStudio?style=social&label=Star)\n\n[arxiv 2024.06] Inv-Adapter: ID Customization Generation via Image Inversion and Lightweight Adapter[[PDF](https://arxiv.org/abs/2406.02881)]\n\n[arxiv 2024.06] AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.05000),[Page](https://attndreambooth.github.io/)] ![Code](https://img.shields.io/github/stars/lyuPang/AttnDreamBooth?style=social&label=Star)\n\n[arxiv 2024.06]Tuning-Free Visual Customization via View Iterative Self-Attention Control[[PDF](https://arxiv.org/abs/2406.06258)]\n\n[arxiv 2024.06]PaRa: Personalizing Text-to-Image Diffusion via Parameter Rank Reduction[[PDF](https://arxiv.org/pdf/2406.05641)]\n\n[arxiv 2024.06]MS-Diffusion: Multi-subject Zero-shot Image Personalization with Layout Guidance [[PDF](https://arxiv.org/abs/2406.07209), [Page](https://ms-diffusion.github.io/)] ![Code](https://img.shields.io/github/stars/MS-Diffusion/MS-Diffusion?style=social&label=Star)\n\n[arxiv 2024.06] Interpreting the Weight Space of Customized Diffusion Models[[PDF](https://arxiv.org/abs/2406.09413), [Page](https://snap-research.github.io/weights2weights)] ![Code](https://img.shields.io/github/stars/snap-research/weights2weights?style=social&label=Star)\n\n[arxiv 2024.06]DreamBench++: A Human-Aligned Benchmark for Personalized Image Generation [[PDF](https://arxiv.org/abs/2406.16855), [Page](https://dreambenchplus.github.io/)] ![Code](https://img.shields.io/github/stars/yuangpeng/dreambench_plus?style=social&label=Star)\n\n[arxiv 2024.06]Character-Adapter: Prompt-Guided Region Control for High-Fidelity Character Customization[[PDF](https://arxiv.org/abs/2406.16537)]\n\n[arxiv 2024.06]LIPE: Learning Personalized Identity Prior for Non-rigid Image Editing [[PDF](https://arxiv.org/abs/2406.17236)]\n\n[arxiv 2024.06] AlignIT: Enhancing Prompt Alignment in Customization of Text-to-Image Models  [[PDF](https://arxiv.org/abs/2406.18893)]\n\n[arxiv 2024.07] JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation[[PDF](https://arxiv.org/abs/2407.06187), [Page](https://research.nvidia.com/labs/dir/jedi/)]\n\n[arxiv 2024.07]LogoSticker: Inserting Logos into Diffusion Models for Customized Generation [[PDF](https://arxiv.org/abs/2407.13752), [Page](https://mingkangz.github.io/logosticker/)]\n\n[arxiv 2024.07] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence  [[PDF](https://arxiv.org/abs/2407.16655),[Page](https://aim-uofa.github.io/MovieDreamer/)] ![Code](https://img.shields.io/github/stars/aim-uofa/MovieDreamer?style=social&label=Star)\n\n[arxiv 2024.08]Concept Conductor: Orchestrating Multiple Personalized Concepts in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2408.03632), [Page](https://github.com/Nihukat/Concept-Conductor)] ![Code](https://img.shields.io/github/stars/Nihukat/Concept-Conductor?style=social&label=Star)\n\n[arxiv 2024.08]PreciseControl: Enhancing Text-To-Image Diffusion Models with Fine-Grained Attribute Control [[PDF](https://arxiv.org/abs/2408.05083), [Page](https://rishubhpar.github.io/PreciseControl.home/)] ![Code](https://img.shields.io/github/stars/rishubhpar/PreciseControl?style=social&label=Star) \n\n[arxiv 2024.08] DiffLoRA: Generating Personalized Low-Rank Adaptation Weights 
with Diffusion[[PDF](https://arxiv.org/abs/2408.06740)]\n\n[arxiv 2024.08]RealCustom++: Representing Images as Real-Word for Real-Time Customization [[PDF](https://arxiv.org/abs/2408.09744)]\n\n[arxiv 2024.08] MagicID: Flexible ID Fidelity Generation System[[PDF](https://arxiv.org/abs/2408.09248)]\n\n[arxiv 2024.08] CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization[[PDF](https://arxiv.org/abs/2408.15914)]\n\n[arxiv 2024.09] CustomContrast: A Multilevel Contrastive Perspective For Subject-Driven Text-to-Image Customization[[PDF](https://arxiv.org/abs/2409.05606), [Page](https://cn-makers.github.io/CustomContrast/)]\n\n[arxiv 2024.09]GroundingBooth: Grounding Text-to-Image Customization [[PDF](https://arxiv.org/abs/2409.08520), [Page](https://groundingbooth.github.io/)]\n\n[arxiv 2024.09]TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder [[PDF](https://arxiv.org/abs/2409.08248), [Page](https://textboost.github.io/)]  ![Code](https://img.shields.io/github/stars/nahyeonkaty/textboost?style=social&label=Star)\n\n\n[arxiv 2024.09]SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [[PDF](https://export.arxiv.org/abs/2409.06633), [Page](https://sjtuplayer.github.io/projects/SaRA/)] ![Code](https://img.shields.io/github/stars/sjtuplayer/SaRA?style=social&label=Star)\n\n[arxiv 2024.09] Resolving Multi-Condition Confusion for Finetuning-Free Personalized Image Generation[[PDF](https://arxiv.org/abs/2409.17920), [Page](https://github.com/hqhQAQ/MIP-Adapter)]  ![Code](https://img.shields.io/github/stars/hqhQAQ/MIP-Adapter?style=social&label=Star)\n\n[arxiv 2024.09] Imagine yourself: Tuning-Free Personalized Image Generation[[PDF](https://arxiv.org/abs/2409.13346)]\n\n[arxiv 2024.10] Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.00700)]\n\n[arxiv 2024.10] Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis  [[PDF](https://arxiv.org/abs/2409.19111)]\n\n[arxiv 2024.10]Event-Customized Image Generation[[PDF](https://arxiv.org/abs/2410.02483)]\n\n[arxiv 2024.10] DisEnvisioner: Disentangled and Enriched Visual Prompt for Customized Image Generation [[PDF](https://arxiv.org/abs/2410.02067),[Page](https://disenvisioner.github.io/)] ![Code](https://img.shields.io/github/stars/EnVision-Research/DisEnvisioner?style=social&label=Star)\n\n[arxiv 2024.10] HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2410.08192),[Page](https://sites.google.com/view/hybridbooth)]\n\n[arxiv 2024.10]  Learning to Customize Text-to-Image Diffusion In Diverse Context [[PDF](https://arxiv.org/abs/2410.10058)]\n\n[arxiv 2024.10]  FaceChain-FACT: Face Adapter with Decoupled Training for Identity-preserved Personalization [[PDF](https://arxiv.org/abs/2410.12312),[Page](https://github.com/modelscope/facechain)] ![Code](https://img.shields.io/github/stars/modelscope/facechain?style=social&label=Star)\n\n[arxiv 2024.10] MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2410.13370),[Page](https://correr-zhou.github.io/MagicTailor)] ![Code](https://img.shields.io/github/stars/correr-zhou/MagicTailor?style=social&label=Star)\n\n[arxiv 2024.10] Unbounded: A Generative Infinite Game of Character Life Simulation  
[[PDF](https://arxiv.org/abs/2410.18975),[Page](https://generative-infinite-game.github.io/)]\n\n[arxiv 2024.10] How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?  [[PDF](https://arxiv.org/abs/2410.17594),[Page](https://github.com/JiahuaDong/CIFC)] ![Code](https://img.shields.io/github/stars/JiahuaDong/CIFC?style=social&label=Star)\n\n[arxiv 2024.10] RelationBooth: Towards Relation-Aware Customized Object Generation  [[PDF](https://arxiv.org/abs/2410.23280),[Page](https://shi-qingyu.github.io/RelationBooth/)]\n\n[arxiv 2024.10]  In-Context LoRA for Diffusion Transformers [[PDF](https://arxiv.org/pdf/2410.23775),[Page](https://github.com/ali-vilab/In-Context-LoRA)] ![Code](https://img.shields.io/github/stars/ali-vilab/In-Context-LoRA?style=social&label=Star)\n\n[arxiv 2024.10] Novel Object Synthesis via Adaptive Text-Image Harmony  [[PDF](https://arxiv.org/abs/2410.20823),[Page](https://xzr52.github.io/ATIH/)] \n\n[arxiv 2024.11] Hollowed Net for On-Device Personalization of Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.01179)]\n\n[arxiv 2024.11] DomainGallery: Few-shot Domain-driven Image Generation by Attribute-centric Finetuning  [[PDF](https://arxiv.org/abs/2411.04571),[Page](https://github.com/Ldhlwh/DomainGallery)] ![Code](https://img.shields.io/github/stars/Ldhlwh/DomainGallery?style=social&label=Star)\n\n[arxiv 2024.11] Group Diffusion Transformers are Unsupervised Multitask Learners  [[PDF](https://arxiv.org/abs/2410.15027)]\n\n[arxiv 2024.11] DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting  [[PDF](https://arxiv.org/abs/2411.17223),[Page](https://github.com/mycfhs/DreamMix)] ![Code](https://img.shields.io/github/stars/mycfhs/DreamMix?style=social&label=Star)\n\n[arxiv 2024.12] DreamBlend: Advancing Personalized Fine-tuning of Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.19390)]\n\n[arxiv 2024.12]  Improving Multi-Subject Consistency in Open-Domain Image Generation with Isolation and Reposition Attention [[PDF](https://arxiv.org/abs/2411.19261)]\n\n[arxiv 2024.12]  Diffusion Self-Distillation for Zero-Shot Customized Image Generation [[PDF](https://arxiv.org/abs/2411.18616),[Page](https://primecai.github.io/dsd/)] \n\n[arxiv 2024.12] UnZipLoRA: Separating Content and Style from a Single Image  [[PDF](https://arxiv.org/abs/2412.04465),[Page](https://unziplora.github.io/)] \n\n[arxiv 2024.12]  PatchDPO: Patch-level DPO for Finetuning-free Personalized Image Generation [[PDF](https://arxiv.org/abs/2412.03177),[Page](https://github.com/hqhQAQ/PatchDPO)] ![Code](https://img.shields.io/github/stars/hqhQAQ/PatchDPO?style=social&label=Star)\n\n[arxiv 2024.12] LoRA.rar: Learning to Merge LoRAs via Hypernetworks for Subject-Style Conditioned Image Generation  [[PDF](https://arxiv.org/abs/2412.05148)]\n\n[arxiv 2024.12]  Customized Generation Reimagined: Fidelity and Editability Harmonized [[PDF](https://arxiv.org/abs/2412.04831),[Page](https://github.com/jinjianRick/DCI_ICO)] ![Code](https://img.shields.io/github/stars/jinjianRick/DCI_ICO?style=social&label=Star)\n\n[arxiv 2024.12] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character Customization  [[PDF](https://arxiv.org/abs/2412.07375),[Page](https://github.com/Aria-Zhangjl/StoryWeaver)] ![Code](https://img.shields.io/github/stars/Aria-Zhangjl/StoryWeaver?style=social&label=Star)\n\n[arxiv 2024.12] ObjectMate: A Recurrence Prior for Object Insertion and Subject-Driven Generation  
[[PDF](https://arxiv.org/abs/2412.08645),[Page](https://object-mate.com/)]\n\n[arxiv 2024.12] DECOR: Decomposition and Projection of Text Embeddings for Text-to-Image Customization  [[PDF](https://arxiv.org/abs/2412.09169)]\n\n[arxiv 2024.12]  A LoRA is Worth a Thousand Pictures [[PDF](https://arxiv.org/pdf/2412.12048)]\n\n[arxiv 2024.12] Personalized Representation from Personalized Generation  [[PDF](https://arxiv.org/abs/2412.16156),[Page](https://personalized-rep.github.io/)] ![Code](https://img.shields.io/github/stars/ssundaram21/personalized-rep?style=social&label=Star)\n\n[arxiv 2025.01] ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling  [[PDF](https://arxiv.org/abs/2501.02487),[Page](https://ali-vilab.github.io/ACE_plus_page/)] ![Code](https://img.shields.io/github/stars/ali-vilab/ACE_plus?style=social&label=Star)\n\n[arxiv 2025.01]  AnyStory: Towards Unified Single and Multiple Subject Personalization in Text-to-Image Generation [[PDF](https://aigcdesigngroup.github.io/AnyStory/),[Page](https://aigcdesigngroup.github.io/AnyStory/)] \n\n[arxiv 2025.01]  IC-Portrait: In-Context Matching for View-Consistent Personalized Portrait [[PDF](https://arxiv.org/abs/2501.17159)]\n\n[arxiv 2025.02] Generating Multi-Image Synthetic Data for Text-to-Image Customization  [[PDF](https://arxiv.org/abs/2502.01720),[Page](https://www.cs.cmu.edu/~syncd-project/)] ![Code](https://img.shields.io/github/stars/nupurkmr9/syncd-project?style=social&label=Star)\n\n[arxiv 2025.02] Multitwine: Multi-Object Compositing with Text and Layout Control  [[PDF](https://arxiv.org/abs/2502.05165)]\n\n[arxiv 2025.02] Beyond Fine-Tuning: A Systematic Study of Sampling Techniques in Personalized Image Generation  [[PDF](https://arxiv.org/abs/2502.05895),[Page](https://github.com/ControlGenAI/PersonGenSampler)] ![Code](https://img.shields.io/github/stars/ControlGenAI/PersonGenSampler?style=social&label=Star)\n\n[arxiv 2025.02]  FreeBlend: Advancing Concept Blending with Staged Feedback-Driven Interpolation Diffusion [[PDF](https://arxiv.org/abs/2502.05606)]\n\n[arxiv 2025.02]  E-MD3C: Taming Masked Diffusion Transformers for Efficient Zero-Shot Object Customization [[PDF](https://arxiv.org/pdf/2502.09164)]\n\n[arxiv 2025.02]  Personalized Image Generation with Deep Generative Models: A Decade Survey [[PDF](https://arxiv.org/abs/2502.13081),[Page](https://github.com/csyxwei/Awesome-Personalized-Image-Generation)] ![Code](https://img.shields.io/github/stars/csyxwei/Awesome-Personalized-Image-Generation?style=social&label=Star)\n\n[arxiv 2025.02] IP-Composer: Semantic Composition of Visual Concepts  [[PDF](https://arxiv.org/pdf/2502.13951),[Page](https://ip-composer.github.io/IP-Composer/)] ![Code](https://img.shields.io/github/stars/ip-composer/IP-Composer?style=social&label=Star)\n\n[arxiv 2025.03]  LatexBlend: Scaling Multi-concept Customized Generation with Latent Textual Blending [[PDF](https://jinjianrick.github.io/latexblend/unicanvas_ijcv.pdf),[Page](https://jinjianrick.github.io/latexblend/)] ![Code](https://img.shields.io/github/stars/jinjianRick/latexblend?style=social&label=Star)\n\n[arxiv 2025.03] DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability  [[PDF](https://arxiv.org/pdf/2503.06505)]\n\n[arxiv 2025.03]  Personalize Anything for Free with Diffusion Transformer [[PDF](https://arxiv.org/pdf/2503.12590),[Page](https://fenghora.github.io/Personalize-Anything-Page/)] 
![Code](https://img.shields.io/github/stars/fenghora/personalize-anything?style=social&label=Star)\n\n[arxiv 2025.03]  EditID: Training-Free Editable ID Customization for Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2503.12526)]\n\n[arxiv 2025.03]  Visual Persona: Foundation Model for Full-Body Human Customization [[PDF](https://arxiv.org/pdf/2503.15406),[Page](https://cvlab-kaist.github.io/Visual-Persona/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/Visual-Persona?style=social&label=Star)\n\n[arxiv 2025.03]  Efficient Personalization of Quantized Diffusion Model without Backpropagation [[PDF](https://arxiv.org/pdf/2503.14868),[Page](https://ignoww.github.io/ZOODiP_project/)] ![Code](https://img.shields.io/github/stars/ignoww/ZOODiP?style=social&label=Star)\n\n[arxiv 2025.03]  InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity [[PDF](https://arxiv.org/pdf/2503.16418),[Page](https://bytedance.github.io/InfiniteYou)] ![Code](https://img.shields.io/github/stars/bytedance/InfiniteYou?style=social&label=Star)\n\n[arxiv 2025.03] Zero-Shot Visual Concept Blending Without Text Guidance  [[PDF](https://arxiv.org/abs/2503.21277),[Page](https://github.com/ToyotaCRDL/Visual-Concept-Blending)] ![Code](https://img.shields.io/github/stars/ToyotaCRDL/Visual-Concept-Blending?style=social&label=Star)\n\n[arxiv 2025.03] Meta-LoRA: Meta-Learning LoRA Components for Domain-Aware ID Personalization  [[PDF](https://arxiv.org/abs/2503.22352)]\n\n[arxiv 2025.04] Consistent Subject Generation via Contrastive Instantiated Concepts  [[PDF](https://arxiv.org/abs/2503.24387),[Page](https://contrastive-concept-instantiation.github.io/)] ![Code](https://img.shields.io/github/stars/contrastive-concept-instantiation/cocoins?style=social&label=Star)\n\n[arxiv 2025.04]  Enhancing Creative Generation on Stable Diffusion-based Models [[PDF](https://arxiv.org/abs/2503.23538)]\n\n[arxiv 2025.04] Concept Lancet: Image Editing with Compositional Representation Transplant  [[PDF](https://arxiv.org/abs/2504.02828),[Page](https://peterljq.github.io/project/colan)] ![Code](https://img.shields.io/github/stars/peterljq/Concept-Lancet?style=social&label=Star)\n\n[arxiv 2025.04] InstantCharacter: Personalize Any Characters with a Scalable Diffusion Transformer Framework  [[PDF](https://arxiv.org/abs/2504.12395),[Page](https://github.com/Tencent/InstantCharacter)] ![Code](https://img.shields.io/github/stars/Tencent/InstantCharacter?style=social&label=Star)\n\n[arxiv 2025.04] FreeGraftor: Training-Free Cross-Image Feature Grafting for Subject-Driven Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2504.15958),[Page](https://github.com/Nihukat/FreeGraftor)] ![Code](https://img.shields.io/github/stars/Nihukat/FreeGraftor?style=social&label=Star)\n\n[arxiv 2025.04]  Learning Joint ID-Textual Representation for ID-Preserving Image Synthesis [[PDF](https://arxiv.org/abs/2504.14202)]\n\n\n[arxiv 2025.04] DreamO: A Unified Framework for Image Customization  [[PDF](https://arxiv.org/abs/2504.16915),[Page](https://mc-e.github.io/project/DreamO/)] ![Code](https://img.shields.io/github/stars/bytedance/DreamO?style=social&label=Star)\n\n[arxiv 2025.05] Multi-party Collaborative Attention Control for Image Customization  [[PDF](https://arxiv.org/pdf/2505.01428)]\n\n[arxiv 2025.05] PIDiff: Image Customization for Personalized Identities with Diffusion Models  [[PDF](https://arxiv.org/pdf/2505.05081)]\n\n[arxiv 2025.06] Negative-Guided Subject Fidelity Optimization for Zero-Shot Subject-Driven Generation  
[[PDF](https://arxiv.org/abs/2506.03621)]\n\n[arxiv 2025.06] ShowFlow: From Robust Single Concept to Condition-Free Multi-Concept Generation  [[PDF](https://arxiv.org/pdf/2506.18493)]\n\n[arxiv 2025.06] XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation  [[PDF](https://arxiv.org/abs/2506.21416),[Page](https://bytedance.github.io/XVerse/)] ![Code](https://img.shields.io/github/stars/bytedance/XVerse?style=social&label=Star)\n\n[arxiv 2025.07] IC-Custom: Diverse Image Customization via In-Context Learning  [[PDF](https://arxiv.org/abs/2507.01926),[Page](https://liyaowei-stu.github.io/project/IC_Custom)] ![Code](https://img.shields.io/github/stars/TencentARC/IC-Custom?style=social&label=Star)\n\n[arxiv 2025.07]  FreeLoRA: Enabling Training-Free LoRA Fusion for Autoregressive Multi-Subject Personalization [[PDF](https://arxiv.org/pdf/2507.01792)]\n\n[arxiv 2025.07]  Memory-Efficient Personalization of Text-to-Image Diffusion Models via Selective Optimization Strategies [[PDF](https://arxiv.org/pdf/2507.10029)]\n\n[arxiv 2025.07] CharaConsist: Fine-Grained Consistent Character Generation  [[PDF](https://arxiv.org/abs/2507.11533),[Page](https://murray-wang.github.io/CharaConsist/)] ![Code](https://img.shields.io/github/stars/Murray-Wang/CharaConsist?style=social&label=Star)\n\n[arxiv 2025.07] Imbalance in Balance: Online Concept Balancing in Generation Models  [[PDF](https://arxiv.org/abs/2507.13345)]\n\n[arxiv 2025.07] PositionIC: Unified Position and Identity Consistency for Image Customization  [[PDF](https://arxiv.org/pdf/2507.13861)]\n\n[arxiv 2025.07] FreeCus: Free Lunch Subject-driven Customization in Diffusion Transformers  [[PDF](https://arxiv.org/abs/2507.15249),[Page](https://github.com/Monalissaa/FreeCus)] ![Code](https://img.shields.io/github/stars/Monalissaa/FreeCus?style=social&label=Star)\n\n[arxiv 2025.08] TARA: Token-Aware LoRA for Composable Personalization in Diffusion Models  [[PDF](https://arxiv.org/pdf/2508.08812),[Page](https://github.com/YuqiPeng77/TARA)] ![Code](https://img.shields.io/github/stars/YuqiPeng77/TARA?style=social&label=Star)\n\n[arxiv 2025.08]  MM-R1: Unleashing the Power of Unified Multimodal Large Language Models for Personalized Image Generation [[PDF](https://arxiv.org/abs/2508.11433)]\n\n[arxiv 2025.08] USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning  [[PDF](https://arxiv.org/abs/2508.18966),[Page](https://bytedance.github.io/USO/)] ![Code](https://img.shields.io/github/stars/bytedance/USO?style=social&label=Star)\n\n[arxiv 2025.09]  MOSAIC: Multi-Subject Personalized Generation via Correspondence-Aware Alignment and Disentanglement [[PDF](https://arxiv.org/pdf/2509.01977),[Page](https://bytedance-fanqie-ai.github.io/MOSAIC/)] ![Code](https://img.shields.io/github/stars/bytedance-fanqie-ai/MOSAIC?style=social&label=Star)\n\n[arxiv 2025.09] FocusDPO: Dynamic Preference Optimization for Multi-Subject Personalized Image Generation via Adaptive Focus  [[PDF](https://arxiv.org/abs/2509.01181),[Page](https://bytedance-fanqie-ai.github.io/FocusDPO/)] ![Code](https://img.shields.io/github/stars/bytedance-fanqie-ai/FocusDPO?style=social&label=Star)\n\n[arxiv 2025.09]  UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward 
[[PDF](https://arxiv.org/abs/2509.06818),[Page](https://bytedance.github.io/UMO/)] ![Code](https://img.shields.io/github/stars/bytedance/UMO?style=social&label=Star)\n\n[arxiv 2025.09] EditIDv2: Editable ID Customization with Data-Lubricated ID Feature Integration for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2509.05659)]\n\n[arxiv 2025.09] ComposeMe: Attribute-Specific Image Prompts for Controllable Human Image Generation [[PDF](https://arxiv.org/abs/2509.18092)] \n\n[arxiv 2025.09] Mind-the-Glitch: Visual Correspondence for Detecting Inconsistencies in Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2509.21989),[Page](https://abdo-eldesokey.github.io/mind-the-glitch/)] ![Code](https://img.shields.io/github/stars/abdo-eldesokey/mind-the-glitch?style=social&label=Star)\n\n[arxiv 2025.09] MultiCrafter: High-Fidelity Multi-Subject Generation via Spatially Disentangled Attention and Identity-Aware Reinforcement Learning  [[PDF](https://arxiv.org/abs/2509.21953),[Page](https://wutao-cs.github.io/MultiCrafter/)] ![Code](https://img.shields.io/github/stars/WuTao-CS/MultiCrafter?style=social&label=Star)\n\n[arxiv 2025.10] MOLM: Mixture of LoRA Markers  [[PDF](https://arxiv.org/abs/2510.00293)]\n\n[arxiv 2025.10]  ContextGen: Contextual Layout Anchoring for Identity-Consistent Multi-Instance Generation [[PDF](https://arxiv.org/abs/2510.11000),[Page](https://nenhang.github.io/ContextGen/)] ![Code](https://img.shields.io/github/stars/nenhang/ContextGen?style=social&label=Star)\n\n[arxiv 2025.10] ReMix: Towards a Unified View of Consistent Character Generation and Editing  [[PDF](https://arxiv.org/abs/2510.10156)]\n\n[arxiv 2025.10] WithAnyone: Towards Controllable and ID-Consistent Image Generation  [[PDF](https://arxiv.org/abs/2510.14975),[Page](https://doby-xu.github.io/WithAnyone/)] ![Code](https://img.shields.io/github/stars/doby-xu/WithAnyone?style=social&label=Star)\n\n[arxiv 2025.10]  EchoDistill: Bidirectional Concept Distillation for One-Step Diffusion Personalization [[PDF](https://arxiv.org/abs/2510.20512),[Page](https://liulisixin.github.io/EchoDistill-page/)] \n\n[arxiv 2025.11] Multi-View Consistent Human Image Customization via In-Context Learning  [[PDF](https://arxiv.org/abs/2511.00293)]\n\n[arxiv 2025.12]  DynaIP: Dynamic Image Prompt Adapter for Scalable Zero-shot Personalized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2512.09814)]\n\n[arxiv 2025.12]  Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization [[PDF](https://arxiv.org/abs/2512.10955),[Page](https://snap-research.github.io/omni-attribute/)]\n\n[arxiv 2025.12]  3SGen: Unified Subject, Style, and Structure-Driven Image Generation with Adaptive Task-specific Memory [[PDF](https://arxiv.org/pdf/2512.19271)]\n\n[arxiv 2026.01] Re-Align: Structured Reasoning-guided Alignment for In-Context Image Generation and Editing  [[PDF](https://arxiv.org/abs/2601.05124),[Page](https://hrz2000.github.io/realign/)] \n\n[arxiv 2026.01]  Efficient Autoregressive Video Diffusion with Dummy Head [[PDF](https://arxiv.org/abs/2601.20499)]\n\n[arxiv 2026.01] Hierarchical Concept-to-Appearance Guidance for Multi-Subject Image Generation  [[PDF](https://arxiv.org/abs/2602.03448)]\n\n[arxiv 2026.02]  FlowFixer: Towards Detail-Preserving Subject-Driven Generation [[PDF](https://arxiv.org/pdf/2602.21402)]\n\n[arxiv 2026.03] IdGlow: Dynamic Identity Modulation for Multi-Subject Generation  
[[PDF](https://arxiv.org/pdf/2603.00607)]\n\n[arxiv 2026.03] AnyPhoto: Multi-Person Identity Preserving Image Generation with ID Adaptive Modulation on Location Canvas  [[PDF](https://arxiv.org/abs/2603.14770)]\n\n[arxiv 2026.03]  PureCC: Pure Learning for Text-to-Image Concept Customization [[PDF](https://arxiv.org/abs/2603.07561),[Page](https://github.com/lzc-sg/PureCC)] ![Code](https://img.shields.io/github/stars/lzc-sg/PureCC?style=social&label=Star)\n\n[arxiv 2026.03] When Identities Collapse: A Stress-Test Benchmark for Multi-Subject Personalization  [[PDF](https://arxiv.org/abs/2603.26078)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## end of concept\n\n\n## MV Concept \n[arxiv 2025.10] MVCustom: Multi-View Customized Diffusion via Geometric Latent Rendering and Completion  [[PDF](https://arxiv.org/abs/2510.13702),[Page](https://minjung-s.github.io/mvcustom)] ![Code](https://img.shields.io/github/stars/minjung-s/MVCustom?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## multi-object\n[arxiv 2025.06]  MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans [[PDF](https://arxiv.org/abs/2506.20879)]\n\n[arxiv 2025.07]  UNIMC: Taming Diffusion Transformer for Unified Keypoint-Guided Multi-Class Image Generation [[PDF](https://arxiv.org/pdf/2507.02713),[Page](https://unimc-dit.github.io/)] \n\n[arxiv 2025.10] SIGMA-GEN: Structure and Identity Guided Multi-subject Assembly for Image Generation  [[PDF](),[Page](https://oindrilasaha.github.io/SIGMA-Gen/)] ![Code](https://img.shields.io/github/stars/oindrilasaha/SIGMA-Gen-Code?style=social&label=Star)\n\n[arxiv 2025.10] FreeFuse: Multi-Subject LoRA Fusion via Auto Masking at Test Time  [[PDF](https://arxiv.org/pdf/2510.23515),[Page](https://future-item.github.io/FreeFuse/)] \n\n[arxiv 2025.12]  Ar2Can: An Architect and an Artist Leveraging a Canvas for Multi-Human Generation [[PDF](https://arxiv.org/pdf/2511.22690)]\n\n[arxiv 2026.02]  UniRef-Image-Edit: Towards Scalable and Consistent Multi-Reference Image Editing [[PDF](https://arxiv.org/abs/2602.14186)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## group generation \n\n[arxiv 2024.10]  In-Context LoRA for Diffusion Transformers [[PDF](https://arxiv.org/pdf/2410.23775),[Page](https://github.com/ali-vilab/In-Context-LoRA)] ![Code](https://img.shields.io/github/stars/ali-vilab/In-Context-LoRA?style=social&label=Star)\n\n[arxiv 2024.11] Large-Scale Text-to-Image Model with Inpainting is a Zero-Shot Subject-Driven Image Generator  [[PDF](https://arxiv.org/abs/2411.15466),[Page](https://diptychprompting.github.io/)] \n\n[arxiv 2024.12]  X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models [[PDF](https://arxiv.org/abs/2412.01824),[Page](https://github.com/SunzeY/X-Prompt)] ![Code](https://img.shields.io/github/stars/SunzeY/X-Prompt?style=social&label=Star)\n\n[arxiv 2024.12] Generative Photography: Scene-Consistent Camera Control for Realistic Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2412.02168),[Page](https://generative-photography.github.io/project/)] \n\n[arxiv 2025.03]  Piece it Together: Part-Based Concepting with IP-Priors 
[[PDF](https://arxiv.org/pdf/2503.10365),[Page](https://eladrich.github.io/PiT/)] ![Code](https://img.shields.io/github/stars/eladrich/PiT?style=social&label=Star)\n\n[arxiv 2025.03] ConceptGuard: Continual Personalized Text-to-Image Generation with Forgetting and Confusion Mitigation  [[PDF](https://arxiv.org/abs/2503.10358)]\n\n[arxiv 2025.03] Latent Beam Diffusion Models for Decoding Image Sequences  [[PDF](https://arxiv.org/abs/2503.20429)]\n\n[arxiv 2025.04] Consistent Subject Generation via Contrastive Instantiated Concepts  [[PDF](https://arxiv.org/abs/2503.24387)]\n\n[arxiv 2025.04] Less-to-More Generalization: Unlocking More Controllability by In-Context Generation  [[PDF](https://arxiv.org/abs/2504.02160),[Page](https://bytedance.github.io/UNO)] ![Code](https://img.shields.io/github/stars/bytedance/UNO?style=social&label=Star)\n\n[arxiv 2025.04] FlexIP: Dynamic Control of Preservation and Personality for Customized Image Generation  [[PDF](https://arxiv.org/abs/2504.07405),[Page](https://flexip-tech.github.io/flexip/#/)]\n\n[arxiv 2025.08] Scaling Group Inference for Diverse and High-Quality Generation  [[PDF](https://arxiv.org/abs/2508.15773),[Page](https://www.cs.cmu.edu/~group-inference/)] ![Code](https://img.shields.io/github/stars/GaParmar/group-inference?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## interleave generation \n[arxiv 2025.10] IUT-Plug: A Plug-in tool for Interleaved Image-Text Generation  [[PDF](https://arxiv.org/abs/2510.10969)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## multi-view consistency\n\n\n[arxiv 2024.12] MV-Adapter: Multi-view Consistent Image Generation Made Easy  [[PDF](https://arxiv.org/abs/2412.03632),[Page](https://huanngzh.github.io/MV-Adapter-Page/)] ![Code](https://img.shields.io/github/stars/huanngzh/MV-Adapter?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n\n## Story-telling\n\n**[ECCV 2022]** ***Story Dall-E***: Adapting pretrained text-to-image transformers for story continuation [[PDF](https://arxiv.org/pdf/2209.06192.pdf), [code](https://github.com/adymaharana/storydalle)] ![Code](https://img.shields.io/github/stars/adymaharana/storydalle?style=social&label=Star)\n\n**[arxiv 22.11; Alibaba]** Synthesizing Coherent Story with Auto-Regressive Latent Diffusion Models \\[[PDF](https://arxiv.org/pdf/2211.10950.pdf), [code](https://github.com/xichenpan/ARLDM)\\]   ![Code](https://img.shields.io/github/stars/xichenpan/ARLDM?style=social&label=Star)\n\n**[CVPR 2023]** ***Make-A-Story***: Visual Memory Conditioned Consistent Story Generation  \\[[PDF](https://arxiv.org/pdf/2211.13319.pdf) \\]  \n\n[arxiv 2023.01]An Impartial Transformer for Story Visualization [[PDF](https://arxiv.org/pdf/2301.03563.pdf)]\n\n[arxiv 2023.02]Zero-shot Generation of Coherent Storybook from Plain Text Story using Diffusion Models [[PDF](https://arxiv.org/abs/2302.03900)]\n\n[arxiv 2023.05]TaleCrafter: Interactive Story Visualization with Multiple Characters [[PDF](https://arxiv.org/abs/2305.18247), [Page](https://videocrafter.github.io/TaleCrafter/)] ![Code](https://img.shields.io/github/stars/AILab-CVC/TaleCrafter?style=social&label=Star)\n\n[arxiv 2023.06]Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models 
[[PDF](https://arxiv.org/abs/2306.00973), [Page](https://haoningwu3639.github.io/StoryGen_Webpage/)]  ![Code](https://img.shields.io/github/stars/haoningwu3639/StoryGen?style=social&label=Star)\n\n[arxiv 2023.08]Story Visualization by Online Text Augmentation with Context Memory[[PDF](https://arxiv.org/pdf/2308.07575.pdf)]\n\n[arxiv 2023.08]Text-Only Training for Visual Storytelling [[PDF](https://arxiv.org/pdf/2308.08881.pdf)]\n\n[arxiv 2023.08]StoryBench: A Multifaceted Benchmark for Continuous Story Visualization [[PDF](https://arxiv.org/pdf/2308.11606.pdf), [Page](https://github.com/google/storybench)]  ![Code](https://img.shields.io/github/stars/google/storybench?style=social&label=Star)\n\n[arxiv 2023.11]AutoStory: Generating Diverse Storytelling Images with Minimal Human Effort[[PDF](https://arxiv.org/abs/2311.11243),[Page](https://aim-uofa.github.io/AutoStory/)] ![Code](https://img.shields.io/github/stars/aim-uofa/AutoStory?style=social&label=Star)\n\n[arxiv 2023.12]Make-A-Storyboard: A General Framework for Storyboard with Disentangled and Merged Control [[PDF](https://arxiv.org/abs/2312.07549)]\n\n[arxiv 2023.12]CogCartoon: Towards Practical Story Visualization [[PDF](https://arxiv.org/abs/2312.10718)]\n\n[arxiv 2024.03]TARN-VIST: Topic Aware Reinforcement Network for Visual Storytelling [[PDF](https://arxiv.org/abs/2403.11550)]\n\n[arxiv 2024.05] Evolving Storytelling: Benchmarks and Methods for New Character Customization with Diffusion Models  [[PDF](https://arxiv.org/abs/2405.11852)]\n\n[arxiv 2024.07] Boosting Consistency in Story Visualization with Rich-Contextual Conditional Diffusion Models  [[PDF](https://arxiv.org/abs/2407.02482)]\n\n[arxiv 2024.07] SEED-Story: Multimodal Long Story Generation with Large Language Model  [[PDF](https://arxiv.org/abs/2407.08683),[Page](https://github.com/TencentARC/SEED-Story)] ![Code](https://img.shields.io/github/stars/TencentARC/SEED-Story?style=social&label=Star)\n\n[arxiv 2024.07] MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence  [[PDF](https://arxiv.org/abs/2407.16655),[Page](https://aim-uofa.github.io/MovieDreamer/)] ![Code](https://img.shields.io/github/stars/aim-uofa/MovieDreamer?style=social&label=Star)\n\n[arxiv 2024.08]Story3D-Agent: Exploring 3D Storytelling Visualization with Large Language Models[[PDF](https://arxiv.org/abs/2408.11801),[Page](https://yuzhou914.github.io/Story3D-Agent/)]\n\n[arxiv 2024.10] Storynizor: Consistent Story Generation via Inter-Frame Synchronized and Shuffled ID Injection  [[PDF](https://arxiv.org/abs/2409.19624)]\n\n[arxiv 2024.11]  StoryAgent: Customized Storytelling Video Generation via Multi-Agent Collaboration [[PDF](https://arxiv.org/abs/2411.04925),[Page](https://github.com/storyagent123/Comparison-of-storytelling-video-results/blob/main/demo/readme.md)]  ![Code](https://img.shields.io/github/stars/storyagent123/Comparison-of-storytelling-video-results?style=social&label=Star)\n\n[arxiv 2024.12] VideoGen-of-Thought: A Collaborative Framework for Multi-Shot Video Generation  [[PDF](https://arxiv.org/abs/2011.12948),[Page](https://cheliosoops.github.io/VGoT/)] \n\n[arxiv 2024.12] DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation  [[PDF](https://arxiv.org/abs/2412.07589),[Page](https://jianzongwu.github.io/projects/diffsensei/)] ![Code](https://img.shields.io/github/stars/jianzongwu/DiffSensei?style=social&label=Star)\n\n[arxiv 2024.12] StoryWeaver: A Unified World Model for Knowledge-Enhanced Story Character 
Customization  [[PDF](https://arxiv.org/abs/2412.07375),[Page](https://github.com/Aria-Zhangjl/StoryWeaver)] ![Code](https://img.shields.io/github/stars/Aria-Zhangjl/StoryWeaver?style=social&label=Star)\n\n[arxiv 2025.01] One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt  [[PDF](https://arxiv.org/abs/2501.13554),[Page](https://github.com/byliutao/1Prompt1Story)] ![Code](https://img.shields.io/github/stars/byliutao/1Prompt1Story?style=social&label=Star)\n\n[arxiv 2025.01]  Bringing Characters to New Stories: Training-Free Theme-Specific Image Generation via Dynamic Visual Prompting [[PDF](https://arxiv.org/abs/2501.15641)]\n\n[arxiv 2025.04] Object Isolated Attention for Consistent Story Visualization  [[PDF](https://arxiv.org/abs/2503.23353)]\n\n[arxiv 2025.06]  ViStoryBench: Comprehensive Benchmark Suite for Story Visualization [[PDF](https://arxiv.org/abs/2505.24862),[Page](https://vistorybench.github.io/)] ![Code](https://img.shields.io/github/stars/vistorybench/vistorybench?style=social&label=Star)\n\n[arxiv 2025.06] Enhance Multimodal Consistency and Coherence for Text-Image Plan Generation  [[PDF](https://arxiv.org/abs/2506.11380),[Page](https://github.com/psunlpgroup/MPlanner)] ![Code](https://img.shields.io/github/stars/psunlpgroup/MPlanner?style=social&label=Star)\n\n[arxiv 2025.06] ViSTA: Visual Storytelling using Multi-modal Adapters for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/pdf/2506.12198)]\n\n[arxiv 2025.06]  Audit & Repair: An Agentic Framework for Consistent Story Visualization in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2506.18900),[Page](https://auditandrepair.github.io/)] \n\n[arxiv 2025.06] TaleForge: Interactive Multimodal System for Personalized Story Creation  [[PDF](https://arxiv.org/abs/2506.21832)]\n\n[arxiv 2025.08] Story2Board: A Training‑Free Approach for Expressive Storyboard Generation  [[PDF](https://arxiv.org/abs/2508.09983),[Page](https://daviddinkevich.github.io/Story2Board/)] ![Code](https://img.shields.io/github/stars/daviddinkevich/Story2Board?style=social&label=Star)\n\n[arxiv 2025.08] From Image Captioning to Visual Storytelling  [[PDF](https://arxiv.org/abs/2508.14045)]\n\n[arxiv 2025.09]  Plot’n Polish: Zero-shot Story Visualization and Disentangled Editing with Text-to-Image Diffusion Models [[PDF](https://plotnpolish.github.io/#),[Page](https://plotnpolish.github.io/)] \n\n[arxiv 2025.09] TaleDiffusion: Multi-Character Story Generation with Dialogue Rendering  [[PDF](),[Page](https://github.com/ayanban011/TaleDiffusion)] ![Code](https://img.shields.io/github/stars/ayanban011/TaleDiffusion?style=social&label=Star)\n\n[arxiv 2025.10]  SceneDecorator: Towards Scene-Oriented Story Generation with Scene Planning and Scene Consistency [[PDF](https://arxiv.org/pdf/2510.22994),[Page](https://lulupig12138.github.io/SceneDecorator/)] ![Code](https://img.shields.io/github/stars/lulupig12138/SceneDecorator?style=social&label=Star)\n\n[arxiv 2025.12] IdentityStory: Taming Your Identity-Preserving Generator for Human-Centric Story Generation  [[PDF](https://arxiv.org/abs/2512.23519)]\n\n[arxiv 2026.03] Persistent Story World Simulation with Continuous Character Customization  [[PDF](https://arxiv.org/abs/2603.16285)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Layout Generation \n[arxiv 
2022.08]Layout-Bridging Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2208.06162.pdf)]\n\n[arxiv 2023.03]Unifying Layout Generation with a Decoupled Diffusion Model [[PDF](https://arxiv.org/abs/2303.05049)]\n\n[arxiv 2023.02]LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation [[PDF](https://arxiv.org/pdf/2302.08908.pdf)]\n\n[arxiv 2023.03]LayoutDM: Discrete Diffusion Model for Controllable Layout Generation [[PDF](https://arxiv.org/abs/2303.08137), [Page](https://cyberagentailab.github.io/layout-dm/)]\n\n[arxiv 2023.03]LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models [[PDF](https://arxiv.org/abs/2303.11589)]\n\n[arxiv 2023.03]DiffPattern: Layout Pattern Generation via Discrete Diffusion[[PDF](https://arxiv.org/abs/2303.13060)]\n\n[arxiv 2023.03]Freestyle Layout-to-Image Synthesis [[PDF](https://arxiv.org/abs/2303.14412)]\n\n[arxiv 2023.04]Training-Free Layout Control with Cross-Attention Guidance [[PDF](https://arxiv.org/abs/2304.03373), [Page](https://silent-chen.github.io/layout-guidance/)]\n\n->[arxiv 2023.05]LayoutGPT: Compositional Visual Planning and Generation with Large Language Models [[PDF](https://arxiv.org/abs/2305.15393)]\n\n->[arxiv 2023.05]Visual Programming for Text-to-Image Generation and Evaluation [[PDF](https://arxiv.org/abs/2305.15328), [Page](https://vp-t2i.github.io/)]\n\n[arxiv 2023.06]Relation-Aware Diffusion Model for Controllable Poster Layout Generation [[PDF](https://arxiv.org/abs/2306.09086)]\n\n[arxiv 2023.08]LayoutLLM-T2I: Eliciting Layout Guidance from LLM for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2308.05095)]\n\n[arxiv 2023.08]Learning to Generate Semantic Layouts for Higher Text-Image Correspondence in Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2308.08157)]\n\n[arxiv 2023.08]Dense Text-to-Image Generation with Attention Modulation [[PDF](https://arxiv.org/abs/2308.12964), [Page](https://github.com/naver-ai/DenseDiffusion)]\n\n[arxiv 2023.11]Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation [[PDF](https://arxiv.org/abs/2311.13602),[Page](https://udonda.github.io/RALF/)]\n\n[arxiv 2023.11]Check, Locate, Rectify: A Training-Free Layout Calibration System for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2311.15773)]\n\n[arxiv 2023.12]Reason out Your Layout: Evoking the Layout Master from Large Language Models for Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2311.17126)]\n\n[arxiv 2024.02]Layout-to-Image Generation with Localized Descriptions using ControlNet with Cross-Attention Control [[PDF](https://arxiv.org/abs/2402.13404)]\n\n[arxiv 2024.02]Multi-LoRA Composition for Image Generation [[PDF](https://arxiv.org/abs/2402.16843),[Page](https://maszhongming.github.io/Multi-LoRA-Composition/)]\n\n[arxiv 2024.03]NoiseCollage: A Layout-Aware Text-to-Image Diffusion Model Based on Noise Cropping and Merging [[PDF](https://arxiv.org/abs/2403.03485)]\n\n[arxiv 2024.03]Discriminative Probing and Tuning for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.04321),[Page](https://dpt-t2i.github.io/)]\n\n[arxiv 2024.03]DivCon: Divide and Conquer for Progressive Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.06400)]\n\n[arxiv 2024.03]LayoutFlow: Flow Matching for Layout Generation [[PDF](https://arxiv.org/pdf/2403.18187.pdf)]\n\n[arxiv 2024.05] Enhancing Image Layout Control with Loss-Guided Diffusion Models [[PDF](https://arxiv.org/abs/2405.14101)]\n\n[arxiv 2024.06]Zero-Painter: Training-Free 
Layout Control for Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2406.04032),[Page](https://github.com/Picsart-AI-Research/Zero-Painter)] \n\n[arxiv 2024.09] Rethinking The Training And Evaluation of Rich-Context Layout-to-Image Generation[[PDF](https://arxiv.org/abs/2409.04847),[Page](https://github.com/cplusx/rich_context_L2I/tree/main)] \n\n[arxiv 2024.09] SpotActor: Training-Free Layout-Controlled Consistent Image Generation[[PDF](https://arxiv.org/abs/2409.04801)] \n\n[arxiv 2024.09] IFAdapter: Instance Feature Control for Grounded Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2409.08240),[Page](https://ifadapter.github.io/)] \n\n[arxiv 2024.09] Scribble-Guided Diffusion for Training-free Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2409.08026),[Page](https://github.com/kaist-cvml/scribble-guided-diffusion)] \n\n[arxiv 2024.09] Layout-Corrector: Alleviating Layout Sticking Phenomenon in Discrete Diffusion Model  [[PDF](https://arxiv.org/abs/2409.16689),[Page](https://iwa-shi.github.io/Layout-Corrector-Project-Page/)] \n\n[arxiv 2024.10] Story-Adapter: A Training-free Iterative Framework for Long Story Visualization  [[PDF](https://arxiv.org/abs/2410.06244),[Page](https://jwmao1.github.io/storyadapter)] \n\n[arxiv 2024.11]  HouseLLM: LLM-Assisted Two-Phase Text-to-Floorplan Generation [[PDF](https://arxiv.org/abs/2411.12279)] \n\n[arxiv 2024.12] DogLayout: Denoising Diffusion GAN for Discrete and Continuous Layout Generation  [[PDF](https://arxiv.org/abs/2412.00381),[Page](https://github.com/deadsmither5/DogLayout)] ![Code](https://img.shields.io/github/stars/deadsmither5/DogLayout?style=social&label=Star) \n\n[arxiv 2024.12] VASCAR: Content-Aware Layout Generation via Visual-Aware Self-Correction  [[PDF](https://arxiv.org/abs/2412.04237)]\n\n[arxiv 2024.12] SLayR: Scene Layout Generation with Rectified Flow  [[PDF](https://arxiv.org/abs/2412.05003)]\n\n[arxiv 2025.02]  WorldCraft: Photo-Realistic 3D World Creation and Customization via LLM Agents [[PDF](https://arxiv.org/pdf/2502.15601)]\n\n[arxiv 2025.04] LayoutCoT: Unleashing the Deep Reasoning Potential of Large Language Models for Layout Generation  [[PDF](https://arxiv.org/pdf/2504.10829)]\n\n[arxiv 2025.05] Lay-Your-Scene: Natural Scene Layout Generation with Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2505.04718)]\n\n[arxiv 2025.07] ReLayout: Integrating Relation Reasoning for Content-aware Layout Generation with Multi-modal Large Language Models  [[PDF](https://arxiv.org/pdf/2507.05568),[Page](https://huggingface.co/datasets/jiaxutian/ReLayout)] \n\n[arxiv 2025.10] SEGA: A Stepwise Evolution Paradigm for Content-Aware Layout Generation with Design Prior  [[PDF](https://arxiv.org/abs/2510.15749),[Page](https://brucew91.github.io/SEGA.github.io/)] ![Code](https://img.shields.io/github/stars/BruceW91/SEGA?style=social&label=Star) \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## SVG\n[arxiv 2022.11; UCB] VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models \\[[PDF](https://arxiv.org/abs/2211.11319)\\]\n\n[arxiv 2023.04]IconShop: Text-Based Vector Icon Synthesis with Autoregressive Transformers [[PDF](https://arxiv.org/abs/2304.14400), [Page](https://kingnobro.github.io/iconshop/)]\n\n[arxiv 2023.06]Image Vectorization: a Review [[PDF](https://arxiv.org/abs/2306.06441)]\n\n[arxiv 2023.06]DiffSketcher: Text Guided Vector Sketch Synthesis through Latent Diffusion 
Models[[PDF](https://arxiv.org/abs/2306.14685)]\n\n[arxiv 2023.09]Text-Guided Vector Graphics Customization [[PDF](https://intchous.github.io/SVGCustomization/),[Page](https://intchous.github.io/SVGCustomization/)]\n\n[arxiv 2023.09]Deep Geometrized Cartoon Line Inbetweening [[PDF](https://arxiv.org/abs/2309.16643),[Page](https://github.com/lisiyao21/AnimeInbet)]\n\n[arxiv 2023.12]VecFusion: Vector Font Generation with Diffusion [[PDF](https://arxiv.org/abs/2312.10540)]\n\n[arxiv 2023.12]StarVector: Generating Scalable Vector Graphics Code from Images[[PDF](https://arxiv.org/abs/2312.11556), [Page](https://github.com/joanrod/star-vector)]\n\n[arxiv 2023.12]SVGDreamer: Text Guided SVG Generation with Diffusion Model [[PDF](https://arxiv.org/abs/2312.16476)]\n\n[arxiv 2024.2]StrokeNUWA: Tokenizing Strokes for Vector Graphic Synthesis [[PDF](https://arxiv.org/abs/2401.17093)]\n\n[arxiv 2024.05]  NIVeL: Neural Implicit Vector Layers for Text-to-Vector Generation  [[PDF](https://arxiv.org/abs/2405.15217)]\n\n[arxiv 2024.11] Chat2SVG: Vector Graphics Generation with Large Language Models and Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.16602),[Page](https://chat2svg.github.io/)]\n\n[arxiv 2024.11]  VQ-SGen: A Vector Quantized Stroke Representation for Sketch Generation [[PDF](https://arxiv.org/abs/2411.16446)]\n\n[arxiv 2024.11]  SketchAgent: Language-Driven Sequential Sketch Generation [[PDF](https://arxiv.org/abs/2411.17673),[Page](https://yael-vinker.github.io/sketch-agent/)] ![Code](https://img.shields.io/github/stars/yael-vinker/SketchAgent?style=social&label=Star)\n\n[arxiv 2024.12]  SVGDreamer++: Advancing Editability and Diversity in Text-Guided SVG Generation [[PDF](https://arxiv.org/abs/2411.17832)]\n\n[arxiv 2025.01] NeuralSVG: An Implicit Representation for Text-to-Vector Generation  [[PDF](https://arxiv.org/abs/2501.03992),[Page](https://sagipolaczek.github.io/NeuralSVG/)] ![Code](https://img.shields.io/github/stars/SagiPolaczek/NeuralSVG?style=social&label=Star)\n\n[arxiv 2025.02] SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation  [[PDF](https://arxiv.org/abs/2502.08642),[Page](https://swiftsketch.github.io/)] ![Code](https://img.shields.io/github/stars/swiftsketch/SwiftSketch?style=social&label=Star)\n\n[arxiv 2025.04] OmniSVG: A Unified Scalable Vector Graphics Generation Model  [[PDF](https://arxiv.org/abs/2504.06263),[Page](https://omnisvg.github.io/)] ![Code](https://img.shields.io/github/stars/OmniSVG/OmniSVG?style=social&label=Star)\n\n[arxiv 2025.06]  SVGenius: Benchmarking LLMs in SVG Understanding, Editing and Generation [[PDF](https://arxiv.org/abs/2506.03139),[Page](https://zju-real.github.io/SVGenius)] ![Code](https://img.shields.io/github/stars/ZJU-REAL/SVGenius-Bench?style=social&label=Star)\n\n[arxiv 2025.08]  SVGen: Interpretable Vector Graphics Generation with Large Language Models [[PDF](https://arxiv.org/pdf/2508.09168),[Page](https://github.com/gitcat-404/SVGen)] ![Code](https://img.shields.io/github/stars/gitcat-404/SVGen?style=social&label=Star)\n\n[arxiv 2025.10] InternSVG: Towards Unified SVG Tasks with Multimodal Large Language Models  [[PDF](https://arxiv.org/abs/2510.11341),[Page](https://hmwang2002.github.io/release/internsvg/)] ![Code](https://img.shields.io/github/stars/hmwang2002/InternSVG?style=social&label=Star)\n\n[arxiv 2025.10] RoboSVG: A Unified Framework for Interactive SVG Generation with Multi-modal Guidance  [[PDF](https://arxiv.org/abs/2510.22684)]\n\n[arxiv 2025.11]  VCode: a Multimodal Coding Benchmark 
with SVG as Symbolic Visual Representation [[PDF](https://arxiv.org/abs/2511.02778),[Page](https://csu-jpg.github.io/VCode)] ![Code](https://img.shields.io/github/stars/CSU-JPG/VCode?style=social&label=Star)\n\n[arxiv 2025.12] DuetSVG: Unified Multimodal SVG Generation with Internal Visual Guidance  [[PDF](https://arxiv.org/abs/2512.10894),[Page](https://intchous.github.io/DuetSVG-site)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## composition & Translation\n[arxiv 2022; Google]Sketch-Guided Text-to-Image Diffusion Models \\[[PDF](https://arxiv.org/pdf/2211.13752.pdf), code\\]  \n\n[arxiv 2022.11; Microsoft]ReCo: Region-Controlled Text-to-Image Generation \\[[PDF](https://arxiv.org/pdf/2211.15518.pdf), code\\]  \n\n[arxiv 2022.11; Meta]SpaText: Spatio-Textual Representation for Controllable Image Generation  \\[[PDF](https://arxiv.org/pdf/2211.14305.pdf), code\\]  \n\n**[arxiv 2022.11; Seoul National University]** ***DATID-3D:*** Diversity-Preserved Domain Adaptation Using Text-to-Image Diffusion for 3D Generative Model. \\[[PROJECT](https://datid-3d.github.io/)]  \n\n[arxiv 2022.12]High-Fidelity Guided Image Synthesis with Latent Diffusion Models \\[[PDF](https://arxiv.org/pdf/2211.17084.pdf)\\]  \n\n[arxiv 2022.12]Fine-grained Image Editing by Pixel-wise Guidance Using Diffusion Models \\[[PDF](https://arxiv.org/pdf/2212.02024.pdf)\\]\n\n[arxiv 2022; MSRA]Paint by Example: Exemplar-based Image Editing with Diffusion Models \\[[PDF](https://arxiv.org/pdf/2211.13227.pdf), [code](https://github.com/Fantasy-Studio/Paint-by-Example)\\]  \n\n[arxiv 2022.12]Towards Practical Plug-and-Play Diffusion Models [[PDF](https://arxiv.org/pdf/2212.05973.pdf)]\n\n[arxiv 2023.01]Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2301.13826)]\n\n[arxiv 2023.02]Zero-shot Image-to-Image Translation [[PDF](https://arxiv.org/abs/2302.03027), [Page](https://pix2pixzero.github.io/)]\n\n[arxiv 2023.02]Universal Guidance for Diffusion Models [[PDF](https://arxiv.org/abs/2302.07121), [Page](https://github.com/arpitbansal297/Universal-Guided-Diffusion)]\n\n[arxiv 2023.02]DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model [[PDF](https://arxiv.org/abs/2302.06908), ]\n\n[arxiv 2023.02]Text-Guided Scene Sketch-to-Photo Synthesis[[PDF](https://arxiv.org/abs/2302.06883),]\n\n*[arxiv 2023.02]**--T2I-Adapter--**: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2302.08453),[Code](https://github.com/TencentARC/T2I-Adapter)]\n\n[arxiv 2023.02]MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation [[PDF](https://arxiv.org/abs/2302.08113), [Page](https://multidiffusion.github.io/)]\n\n*[arxiv 2023.02] **--controlNet--** Adding Conditional Control to Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2302.05543)]\n\n*[arxiv 2023.02] **--composer--**  Composer: Creative and Controllable Image Synthesis with Composable Conditions [[PDF](https://arxiv.org/abs/2302.09778)]\n\n[arxiv 2023.02]Modulating Pretrained Diffusion Models for Multimodal Image Synthesis [[PDF](https://arxiv.org/abs/2302.12764)]\n\n[arxiv 2023.02]Region-Aware Diffusion for Zero-shot Text-driven Image Editing [[PDF](https://arxiv.org/abs/2302.11797)]\n\n[arxiv 2023.03]Collage Diffusion [[PDF](https://arxiv.org/abs/2303.00262)]\n\n*[arxiv 2023.01] GLIGEN: Open-Set Grounded Text-to-Image 
Generation [[PDF](https://arxiv.org/abs/2301.07093), [Page](https://gligen.github.io/), [Code](https://github.com/gligen/GLIGEN)]\n\n[arxiv 2023.03]GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation [[PDF](https://arxiv.org/abs/2303.10056)]\n\n*[arxiv 2023.03]FreeDoM: Training-Free Energy-Guided Conditional Diffusion Model [[PDF](https://arxiv.org/pdf/2303.09833.pdf)]\n\n[arxiv 2023.03]DS-Fusion: Artistic Typography via Discriminated and Stylized Diffusion [[PDF](https://arxiv.org/abs/2303.09604)]\n\n*[arxiv 2023.03]PAIR-Diffusion: Object-Level Image Editing with Structure-and-Appearance Paired Diffusion Models [[PDF](https://arxiv.org/abs/2303.17546), [code](https://github.com/Picsart-AI-Research/PAIR-Diffusion)]\n\n[arxiv 2023.03]DiffCollage: Parallel Generation of Large Content with Diffusion Models [[PDF](https://arxiv.org/abs/2303.17076),[page](https://research.nvidia.com/labs/dir/diffcollage/)]\n\n[arxiv 2023.04]SketchFFusion: Sketch-guided image editing with diffusion model [[PDF](https://arxiv.org/abs/2304.03174)]\n\n[arxiv 2023.04]Training-Free Layout Control with Cross-Attention Guidance [[PDF](https://arxiv.org/abs/2304.03373), [Page](https://silent-chen.github.io/layout-guidance/)]\n\n[arxiv 2023.04]HumanSD: A Native Skeleton-Guided Diffusion Model for Human Image Generation [[PDF](https://arxiv.org/abs/2304.04269), [Page](https://idea-research.github.io/HumanSD/)]\n\n->[arxiv 2023.04]DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion [[PDF](https://arxiv.org/abs/2304.06025)]\n\n-> [arxiv 2023.04]Inpaint Anything: Segment Anything Meets Image Inpainting [[PDF](https://arxiv.org/abs/2304.06790), [Page](https://github.com/geekyutao/Inpaint-Anything)]\n\n[arxiv 2023.04]Soundini: Sound-Guided Diffusion for Natural Video Editing [[PDF](https://arxiv.org/abs/2304.06818)]\n\n->[arxiv 2023.04]Controllable Image Generation via Collage Representations [[PDF](https://arxiv.org/abs/2304.13722)]\n\n[arxiv 2023.05]Guided Image Synthesis via Initial Image Editing in Diffusion Model [[PDF](https://arxiv.org/abs/2305.03382)]\n\n[arxiv 2023.05]DiffSketching: Sketch Control Image Synthesis with Diffusion Models [[PDF](https://arxiv.org/abs/2305.18812)]\n\n-> [arxiv 2023.05]Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2305.16322), [Page](https://github.com/ShihaoZhaoZSH/Uni-ControlNet)]\n\n-> [arxiv 2023.05]Break-A-Scene: Extracting Multiple Concepts from a Single Image [[PDF](https://arxiv.org/abs/2305.16311), [Page](https://omriavrahami.com/break-a-scene/)]\n\n[arxiv 2023.05]Prompt-Free Diffusion: Taking \"Text\" out of Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2305.16223), [Page](https://github.com/SHI-Labs/Prompt-Free-Diffusion)]\n\n[arxiv 2023.05]DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2305.15194)]\n\n[arxiv 2023.05]MaGIC: Multi-modality Guided Image Completion [[PDF](https://arxiv.org/abs/2305.11818)]\n\n[arxiv 2023.05]Text-to-image Editing by Image Information Removal [[PDF](https://arxiv.org/abs/2305.17489)]\n\n[arxiv 2023.06]Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image Generation [[PDF](https://arxiv.org/abs/2306.00964), [Page](https://mhh0318.github.io/cocktail/)]\n\n[arxiv 2023.06]Grounded Text-to-Image Synthesis with Attention Refocusing [[PDF](https://arxiv.org/abs/2306.05427), [Page](https://attention-refocusing.github.io/), 
[Code](https://github.com/Attention-Refocusing/attention-refocusing)]\n\n[arxiv 2023.06]Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2306.09869.pdf)]\n\n[arxiv 2023.06]TryOnDiffusion: A Tale of Two UNets [[PDF](https://arxiv.org/abs/2306.08276)]\n\n->[arxiv 2023.06]Adding 3D Geometry Control to Diffusion Models [[PDF](https://arxiv.org/abs/2306.08103)]\n\n[arxiv 2023.06]Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment [[PDF](https://arxiv.org/abs/2306.08877)]\n\n[arxiv 2023.06]Continuous Layout Editing of Single Images with Diffusion Models [[PDF](https://arxiv.org/pdf/2306.13078.pdf)]\n\n[arxiv 2023.06]DreamEdit: Subject-driven Image Editing [[PDF](https://arxiv.org/abs/2306.12624),[Page](https://dreameditbenchteam.github.io/)]\n\n[arxiv 2023.06]Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2306.14408.pdf)]\n\n[arxiv 2023.06]Zero-shot spatial layout conditioning for text-to-image diffusion models [[PDF](https://arxiv.org/pdf/2306.13754.pdf)]\n\n[arxiv 2023.06]MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion [[PDF](),[Page](https://mvdiffusion.github.io/)]\n\n[arxiv 2023.07]BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion [[PDF](https://arxiv.org/pdf/2307.10816.pdf)]\n\n[arxiv 2023.08]LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts [[PDF](https://arxiv.org/abs/2308.06713)]\n\n[arxiv 2023.09]DreamCom: Finetuning Text-guided Inpainting Model for Image Composition [[PDF](https://arxiv.org/abs/2309.15508)]\n\n[arxiv 2023.11]Cross-Image Attention for Zero-Shot Appearance Transfer[[PDF](https://arxiv.org/abs/2311.03335), [Page](https://garibida.github.io/cross-image-attention)]\n\n[arxiv 2023.12]SmartMask: Context Aware High-Fidelity Mask Generation for Fine-grained Object Insertion and Layout Control [[PDF](https://arxiv.org/abs/2312.05039), [Page](https://smartmask-gen.github.io/)]\n\n[arxiv 2023.12]DreamInpainter: Text-Guided Subject-Driven Image Inpainting with Diffusion Models [[PDF](https://arxiv.org/abs/2312.03771)]\n\n[arxiv 2023.12]InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.05849),[Page](https://jiuntian.github.io/interactdiffusion/)]\n\n[arxiv 2023.12]Disentangled Representation Learning for Controllable Person Image Generation [[PDF](https://arxiv.org/abs/2312.05798)]\n\n[arxiv 2023.12]A Task is Worth One Word: Learning with Task Prompts for High-Quality Versatile Image Inpainting [[PDF](https://arxiv.org/abs/2312.03594), [Page](https://powerpaint.github.io/)]\n\n[arxiv 2023.12]FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition [[PDF](https://arxiv.org/abs/2312.07536),[Page](https://genforce.github.io/freecontrol/)]\n\n[arxiv 2023.12]FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection [[PDF](https://arxiv.org/abs/2312.09252),[Page](https://samsunglabs.github.io/FineControlNet-project-page/)]\n\n[arxiv 2023.12]Local Conditional Controlling for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.08768)]\n\n[arxiv 2023.12]SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing [[PDF](https://arxiv.org/abs/2312.11392), [Page](https://scedit.github.io/)]\n\n[arxiv 2023.12]HD-Painter: 
High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models [[PDF](https://arxiv.org/abs/2312.14091)]\n\n[arxiv 2023.12]Semantic Guidance Tuning for Text-To-Image Diffusion Models[[PDF](https://arxiv.org/abs/2312.15964),[Page](https://korguy.github.io/)]\n\n[arxiv 2024.01]ReplaceAnything as you want: Ultra-high quality content replacement [[PDF](),[Page](https://aigcdesigngroup.github.io/replace-anything/)]\n\n[arxiv 2024.01]Compose and Conquer: Diffusion-Based 3D Depth Aware Composable Image Synthesis [[PDF](https://arxiv.org/abs/2401.09048), [Page](https://github.com/tomtom1103/compose-and-conquer/)]\n\n[arxiv 2024.01]Spatial-Aware Latent Initialization for Controllable Image Generation [[PDF](https://arxiv.org/abs/2401.16157)]\n\n[arxiv 2024.02]Repositioning the Subject within Image [[PDF](https://arxiv.org/abs/2401.16861),[Page](https://yikai-wang.github.io/seele/)]\n\n[arxiv 2024.02]Cross-view Masked Diffusion Transformers for Person Image Synthesis [[PDF](https://arxiv.org/abs/2402.01516)]\n\n[arxiv 2024.02]Image Sculpting: Precise Object Editing with 3D Geometry Control[[PDF](https://arxiv.org/abs/2401.01702),[Page](https://image-sculpting.github.io/)]\n\n[arxiv 2024.02]Outline-Guided Object Inpainting with Diffusion Models [[PDF](https://arxiv.org/abs/2402.16421)]\n\n[arxiv 2024.03]Differential Diffusion: Giving Each Pixel Its Strength [[PDF](https://arxiv.org/abs/2306.00950),[Page](https://differential-diffusion.github.io/)]\n\n[arxiv 2024.03]BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [[PDF](https://arxiv.org/abs/2403.06976),[Page](https://github.com/TencentARC/BrushNet)]\n\n[arxiv 2024.03]SCP-Diff: Photo-Realistic Semantic Image Synthesis with Spatial-Categorical Joint Prior [[PDF](https://arxiv.org/pdf/2403.09638.pdf),[Page](https://air-discover.github.io/SCP-Diff/)]\n\n[arxiv 2024.03]One-Step Image Translation with Text-to-Image Models [[PDF](https://arxiv.org/abs/2403.12036), [Page](https://github.com/GaParmar/img2img-turbo)]\n\n[arxiv 2024.03]LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model [[PDF](https://arxiv.org/abs/2403.11929)]\n\n[arxiv 2024.03]FlexEdit: Flexible and Controllable Diffusion-based Object-centric Image Editing [[PDF](https://arxiv.org/abs/2403.18605),[Page](https://flex-edit.github.io/)]\n\n[arxiv 2024.03]U-Sketch: An Efficient Approach for Sketch to Image Diffusion Models [[PDF](https://arxiv.org/abs/2403.18425)]\n\n[arxiv 2024.03]ECNet: Effective Controllable Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2403.18417.pdf)]\n\n[arxiv 2024.03]ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion [[PDF](https://arxiv.org/abs/2403.18818),[Page](https://objectdrop.github.io/)]\n\n[arxiv 2024.04]LayerDiffuse: Transparent Image Layer Diffusion using Latent Transparency [[PDF](https://arxiv.org/abs/2402.17113),[Page](https://github.com/layerdiffusion/LayerDiffuse)]\n\n[arxiv 2024.04]Move Anything with Layered Scene Diffusion [[PDF](https://arxiv.org/abs/2404.07178)]\n\n[arxiv 2024.04]ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback [[PDF](https://arxiv.org/abs/2404.07987),[Page](https://liming-ai.github.io/ControlNet_Plus_Plus)]\n\n[arxiv 2024.04]Salient Object-Aware Background Generation using Text-Guided Diffusion Models [[PDF](https://arxiv.org/abs/2404.10157)]\n\n[arxiv 2024.04]Ctrl-Adapter: An Efficient and Versatile Framework for 
Adapting Diverse Controls to Any Diffusion Model [[PDF](https://arxiv.org/abs/2404.09967), [Page](https://ctrl-adapter.github.io/)]\n\n[arxiv 2024.04]Enhancing Prompt Following with Visual Control Through Training-Free Mask-Guided Diffusion [[PDF](https://arxiv.org/abs/2404.14768)]\n\n[arxiv 2024.04]ObjectAdd: Adding Objects into Image via a Training-Free Diffusion Modification Fashion [[PDF](https://arxiv.org/abs/2404.17230)]\n\n[arxiv 2024.04]Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting [[PDF](https://arxiv.org/abs/2404.18598),[Page](https://anywheremultiagent.github.io/)]\n\n[arxiv 2024.04]Paint by Inpaint: Learning to Add Image Objects by Removing Them First [[PDF](https://arxiv.org/abs/2404.18212), [Page](https://rotsteinnoam.github.io/Paint-by-Inpaint/)]\n\n[arxiv 2024.05]FlexEControl: Flexible and Efficient Multimodal Control for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2405.04834)]\n\n[arxiv 2024.05]CTRLorALTer: Conditional LoRAdapter for Efficient 0-Shot Control & Altering of T2I Models [[PDF](https://arxiv.org/abs/2405.07913),[Page](https://compvis.github.io/LoRAdapter/)]\n\n[arxiv 2024.06]Stable-Pose: Leveraging Transformers for Pose-Guided Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.02485)]\n\n[arxiv 2024.06] FaithFill: Faithful Inpainting for Object Completion Using a Single Reference Image [[PDF](https://arxiv.org/abs/2406.07865)]\n\n[arxiv 2024.06] AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2406.18958),[Page](https://any-control.github.io/)]\n\n[arxiv 2024.07]  Magic Insert: Style-Aware Drag-and-Drop [[PDF](https://arxiv.org/abs/2407.02489),[Page](https://magicinsert.github.io/)]\n\n[arxiv 2024.07]MIGC++: Advanced Multi-Instance Generation Controller for Image Synthesis   [[PDF](https://arxiv.org/abs/2407.02329),[Page](https://github.com/limuloo/MIGC)]\n\n[arxiv 2024.07] PartCraft: Crafting Creative Objects by Parts  [[PDF](https://arxiv.org/abs/2407.04604),[Page](https://github.com/kamwoh/partcraft)]\n\n[arxiv 2024.07] Every Pixel Has its Moments: Ultra-High-Resolution Unpaired Image-to-Image Translation via Dense Normalization  [[PDF](https://arxiv.org/abs/2407.04245),[Page](https://github.com/Kaminyou/Dense-Normalization)]\n\n[arxiv 2024.07]  FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior [[PDF](https://arxiv.org/abs/2407.04947),[Page](https://github.com/aim-uofa/FreeCompose)]\n\n[arxiv 2024.07] Sketch-Guided Scene Image Generation[[PDF](https://arxiv.org/abs/2407.06469)]\n\n[arxiv 2024.07] Training-free Composite Scene Generation for Layout-to-Image Synthesis [[PDF](https://arxiv.org/abs/2407.13609)]\n\n[arxiv 2024.07] Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model  [[PDF](https://arxiv.org/abs/2407.16982),[Page](https://github.com/OpenGVLab/Diffree)]\n\n[arxiv 2024.08] ControlNeXt: Powerful and Efficient Control for Image and Video Generation  [[PDF](https://arxiv.org/abs/2408.06070),[Page](https://github.com/dvlab-research/ControlNeXt)]\n\n[arxiv 2024.08] TraDiffusion: Trajectory-Based Training-Free Image Generation [[PDF](https://arxiv.org/abs/2408.09739),[Page](https://github.com/och-mac/TraDiffusion)]\n\n[arxiv 2024.08] RepControlNet: ControlNet Reparameterization[[PDF](https://arxiv.org/abs/2408.09240)]\n\n[arxiv 2024.08] Foodfusion: A Novel Approach for Food Image Composition via Diffusion Models  [[PDF](https://arxiv.org/abs/2408.14135)]\n\n[arxiv 
2024.08]Build-A-Scene: Interactive 3D Layout Control for Diffusion-Based Image Generation[[PDF](https://arxiv.org/abs/2408.14819),[Page](https://abdo-eldesokey.github.io/build-a-scene/)]\n\n[arxiv 2024.08] GRPose: Learning Graph Relations for Human Image Generation with Pose Priors [[PDF](https://arxiv.org/abs/2408.16540),[Page](https://github.com/XiangchenYin/GRPose)]\n\n[arxiv 2024.09]  Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects [[PDF](https://arxiv.org/abs/2409.02653)]\n\n[arxiv 2024.09] Diffusion-Based Image-to-Image Translation by Noise Correction via Prompt Interpolation  [[PDF](https://arxiv.org/abs/2409.08077)]\n\n[arxiv 2024.09]PixWizard: Versatile Image-to-Image Visual Assistant with Open-Language Instructions[[PDF](https://arxiv.org/abs/2409.15278),[Page](https://github.com/AFeng-x/PixWizard)]\n\n[arxiv 2024.09] InstructDiffusion: A Generalist Modeling Interface for Vision Tasks  [[PDF](https://arxiv.org/pdf/2309.03895.pdf),[Page](https://gengzigang.github.io/instructdiffusion.github.io/)]\n\n[arxiv 2024.10]  Scene Graph Disentanglement and Composition for Generalizable Complex Image Generation [[PDF](https://arxiv.org/abs/2410.00447)]\n\n[arxiv 2024.10] OmniBooth: Learning Latent Control for Image Synthesis with Multi-modal Instruction  [[PDF](https://arxiv.org/abs/2410.04932),[Page](https://len-li.github.io/omnibooth-web/)]\n\n[arxiv 2024.10]  3DIS: Depth-Driven Decoupled Instance Synthesis for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2410.12669),[Page](https://github.com/limuloo/3DIS)]\n\n[arxiv 2024.10] HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation [[PDF](https://arxiv.org/abs/2410.14324)]\n\n[arxiv 2024.10] TopoDiffusionNet: A Topology-aware Diffusion Model  [[PDF](https://arxiv.org/abs/2410.16646)]\n\n[arxiv 2024.11] Training-free Regional Prompting for Diffusion Transformers  [[PDF](https://arxiv.org/abs/2411.02395),[Page](https://github.com/antonioo-c/Regional-Prompting-FLUX)]\n\n[arxiv 2024.11] Controlling Human Shape and Pose in Text-to-Image Diffusion Models via Domain Adaptation  [[PDF](https://arxiv.org/abs/2411.04724),[Page](https://ivpg.github.io/humanLDM/)]\n\n[arxiv 2024.11] Toward Human Understanding with Controllable Synthesis  [[PDF](https://arxiv.org/abs/2411.08663)]\n\n[arxiv 2024.11]  MagicQuill: An Intelligent Interactive Image Editing System [[PDF](https://arxiv.org/abs/2411.09703),[Page](https://magicquill.art/demo/)]\n\n[arxiv 2024.11] Generating Compositional Scenes via Text-to-image RGBA Instance Generation  [[PDF](https://arxiv.org/abs/2411.10913)]\n\n[arxiv 2024.11] Boundary Attention Constrained Zero-Shot Layout-To-Image Generation  [[PDF](https://arxiv.org/abs/2411.10495)]\n\n[arxiv 2024.11] DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting  [[PDF](https://arxiv.org/abs/2411.17223),[Page](https://github.com/mycfhs/DreamMix)] ![Code](https://img.shields.io/github/stars/mycfhs/DreamMix?style=social&label=Star)\n\n[arxiv 2024.12] ROICtrl: Boosting Instance Control for Visual Generation  [[PDF](https://arxiv.org/abs/2411.17949),[Page](https://roictrl.github.io/)] ![Code](https://img.shields.io/github/stars/showlab/ROICtrl?style=social&label=Star)\n\n[arxiv 2024.12] MFTF: Mask-free Training-free Object Level Layout Control Diffusion Model  [[PDF](https://arxiv.org/abs/2412.01284)]\n\n[arxiv 2024.12] Sketch-Guided Motion Diffusion for Stylized Cinemagraph Synthesis  
[[PDF](https://arxiv.org/pdf/2412.00638)]\n\n[arxiv 2024.12] LayerFusion: Harmonized Multi-Layer Text-to-Image Generation with Generative Priors  [[PDF](https://arxiv.org/abs/2412.04460),[Page](https://layerfusion.github.io/)] \n\n[arxiv 2024.04]LTOS: Layout-controllable Text-Object Synthesis via Adaptive Cross-attention Fusions [[PDF](https://arxiv.org/abs/2404.13579)]\n\n[arxiv 2024.06]  Crafting Parts for Expressive Object Composition [[PDF](https://arxiv.org/abs/2406.10197),[Page](https://rangwani-harsh.github.io/PartCraft)]\n\n[arxiv 2024.12]  CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative Layout-to-Image Generation [[PDF](https://arxiv.org/pdf/2412.03859),[Page](https://creatilayout.github.io/)] ![Code](https://img.shields.io/github/stars/HuiZhang0812/CreatiLayout?style=social&label=Star)\n\n[arxiv 2024.12] DynamicControl: Adaptive Condition Selection for Improved Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2412.03255),[Page](https://hithqd.github.io/projects/Dynamiccontrol/)] ![Code](https://img.shields.io/github/stars/hithqd/DynamicControl?style=social&label=Star)\n\n[arxiv 2024.12] ObjectMate: A Recurrence Prior for Object Insertion and Subject-Driven Generation  [[PDF](https://arxiv.org/abs/2412.08645),[Page](https://object-mate.com/)]\n\n[arxiv 2024.12] VersaGen: Unleashing Versatile Visual Control for Text-to-Image Synthesis  [[PDF](),[Page](https://github.com/FelixChan9527/VersaGen)] ![Code](https://img.shields.io/github/stars/FelixChan9527/VersaGen?style=social&label=Star)\n\n[arxiv 2024.12]  LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model [[PDF](https://arxiv.org/abs/2412.11519)]\n\n[arxiv 2024.12] T3-S2S: Training-free Triplet Tuning for Sketch to Scene Generation  [[PDF](https://arxiv.org/abs/2412.13486)]\n\n[arxiv 2024.12] Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion  [[PDF](https://arxiv.org/abs/2412.14462),[Page](https://kakituken.github.io/affordance-any.github.io/)] ![Code](https://img.shields.io/github/stars/KaKituken/affordance-aware-any?style=social&label=Star)\n\n[arxiv 2025.01]  MObI: Multimodal Object Inpainting Using Diffusion Models [[PDF](https://arxiv.org/pdf/2501.03173)]\n\n[arxiv 2025.01] Slot-Guided Adaptation of Pre-trained Diffusion Models for Object-Centric Learning and Compositional Generation [[PDF](https://arxiv.org/abs/2501.15878),[Page](https://kaanakan.github.io/SlotAdapt/)] \n\n[arxiv 2025.02] Multitwine: Multi-Object Compositing with Text and Layout Control  [[PDF](https://arxiv.org/abs/2502.05165)]\n\n[arxiv 2025.02] ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation  [[PDF](https://arxiv.org/pdf/2502.18364),[Page](https://art-msra.github.io/)] ![Code](https://img.shields.io/github/stars/microsoft/art-msra?style=social&label=Star)\n\n[arxiv 2025.02] SYNTHIA: Novel Concept Design with Affordance Composition  [[PDF](https://arxiv.org/abs/2502.17793),[Page](https://github.com/HyeonjeongHa/SYNTHIA)] ![Code](https://img.shields.io/github/stars/HyeonjeongHa/SYNTHIA?style=social&label=Star)\n\n[arxiv 2025.03]  ToLo: A Two-Stage, Training-Free Layout-To-Image Generation Framework For High-Overlap Layouts [[PDF](https://arxiv.org/pdf/2503.01667),[Page](https://github.com/misaka12435/ToLo)] ![Code](https://img.shields.io/github/stars/misaka12435/ToLo?style=social&label=Star)\n\n[arxiv 2025.03] EasyControl: Adding Efficient and Flexible Control for Diffusion Transformer  
[[PDF](https://arxiv.org/pdf/2503.07027),[Page](https://github.com/Xiaojiu-z/EasyControl)] ![Code](https://img.shields.io/github/stars/Xiaojiu-z/EasyControl?style=social&label=Star)

[arxiv 2025.03]  PixelPonder: Dynamic Patch Adaptation for Enhanced Multi-Conditional Text-to-Image Generation [[PDF](https://arxiv.org/abs/2503.06684),[Page](https://hithqd.github.io/projects/PixelPonder/)] ![Code](https://img.shields.io/github/stars/chfyfr/PixelPonder?style=social&label=Star)

[arxiv 2025.03]  Adding Additional Control to One-Step Diffusion with Joint Distribution Matching [[PDF](https://arxiv.org/pdf/2503.06652)]

[arxiv 2025.03] OminiControl2: Efficient Conditioning for Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2503.08280),[Page](https://github.com/Yuanshi9815/OminiControl)] ![Code](https://img.shields.io/github/stars/Yuanshi9815/OminiControl?style=social&label=Star)

[arxiv 2025.03]  STAY Diffusion: Styled Layout Diffusion Model for Diverse Layout-to-Image Generation[[PDF](https://arxiv.org/abs/2503.12213)]

[arxiv 2025.03] Contrastive Learning Guided Latent Diffusion Model for Image-to-Image Translation  [[PDF](https://arxiv.org/abs/2503.20484)]

[arxiv 2025.03]  Efficient Multi-Instance Generation with Janus-Pro-Driven Prompt Parsing [[PDF](https://arxiv.org/html/2503.21069v1)]

[arxiv 2025.03] ORIGEN: Zero-Shot 3D Orientation Grounding in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2503.22194),[Page](https://origen2025.github.io/)] 

[arxiv 2025.04] DreamFuse: Adaptive Image Fusion with Diffusion Transformer  [[PDF](https://arxiv.org/abs/2504.08291),[Page](https://ll3rd.github.io/DreamFuse/)] 

[arxiv 2025.04]  Insert Anything: Image Insertion via In-Context Editing in DiT [[PDF](https://arxiv.org/abs/2504.15009),[Page](https://song-wensong.github.io/insert-anything/)] ![Code](https://img.shields.io/github/stars/song-wensong/insert-anything?style=social&label=Star)

[arxiv 2025.05] InstanceGen: Image Generation with Instance-level Instructions  [[PDF](https://arxiv.org/abs/2505.05678),[Page](https://tau-vailab.github.io/InstanceGen/)] 

[arxiv 2025.06]  Controllable 3D Placement of Objects with Scene-Aware Diffusion Models [[PDF](https://arxiv.org/pdf/2506.21446)]

[arxiv 2025.06] Rethink Sparse Signals for Pose-guided Text-to-image Generation  [[PDF](https://arxiv.org/abs/2506.20983),[Page](https://github.com/DREAMXFAR/SP-Ctrl)] ![Code](https://img.shields.io/github/stars/DREAMXFAR/SP-Ctrl?style=social&label=Star)

[arxiv 2025.07] MTADiffusion: Mask Text Alignment Diffusion Model for Object Inpainting  [[PDF](https://arxiv.org/abs/2506.23482)]

[arxiv 2025.07] Preserve Anything: Controllable Image Synthesis with Object Preservation  [[PDF](https://arxiv.org/abs/2506.22531)]

[arxiv 2025.07] RichControl: Structure- and Appearance-Rich Training-Free Spatial Control for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2507.02792),[Page](https://zhang-liheng.github.io/rich-control/)] ![Code](https://img.shields.io/github/stars/zhang-liheng/RichControl?style=social&label=Star)

[arxiv 2025.07] HOComp: Interaction-Aware Human-Object Composition  [[PDF](https://arxiv.org/abs/2507.16813),[Page](https://dliang293.github.io/HOComp-project/)] ![Code](https://img.shields.io/github/stars/dliang293/HOComp?style=social&label=Star)

[arxiv 2025.07] LAMIC: Layout-Aware Multi-Image Composition via Scalability of Multimodal Diffusion Transformer  [[PDF](https://arxiv.org/abs/2508.00477),[Page](https://github.com/Suchenl/LAMIC)] 
![Code](https://img.shields.io/github/stars/Suchenl/LAMIC?style=social&label=Star)

[arxiv 2025.08] LaRender: Training-Free Occlusion Control in Image Generation via Latent Rendering  [[PDF](https://arxiv.org/pdf/2508.07647),[Page](https://xiaohangzhan.github.io/projects/larender/)] ![Code](https://img.shields.io/github/stars/XiaohangZhan/LaRender?style=social&label=Star)

[arxiv 2025.08] CharacterShot: Controllable and Consistent 4D Character Animation  [[PDF](https://arxiv.org/abs/2508.07409),[Page](https://github.com/Jeoyal/CharacterShot)] ![Code](https://img.shields.io/github/stars/Jeoyal/CharacterShot?style=social&label=Star)

[arxiv 2025.08] Lay2Story: Extending Diffusion Transformers for Layout-Togglable Story Generation  [[PDF](https://arxiv.org/pdf/2508.08949)]

[arxiv 2025.08] MUSE: Multi-Subject Unified Synthesis via Explicit Layout Semantic Expansion  [[PDF](https://arxiv.org/pdf/2508.14440),[Page](https://github.com/pf0607/MUSE)] ![Code](https://img.shields.io/github/stars/pf0607/MUSE?style=social&label=Star)

[arxiv 2025.09] Layout-Conditioned Autoregressive Text-to-Image Generation via Structured Masking  [[PDF](https://arxiv.org/abs/2509.12046)]

[arxiv 2025.09] ComposeMe: Attribute-Specific Image Prompts for Controllable Human Image Generation [[PDF](https://arxiv.org/abs/2509.18092),[Page]()] 

[arxiv 2025.10] Stitch: Training-Free Position Control in Multimodal Diffusion Transformers  [[PDF](https://arxiv.org/abs/2509.26644),[Page](https://github.com/ExplainableML/Stitch)] ![Code](https://img.shields.io/github/stars/ExplainableML/Stitch?style=social&label=Star)

[arxiv 2025.10]  ScaleWeaver: Weaving Efficient Controllable T2I Generation with Multi-Scale Reference Attention [[PDF](https://arxiv.org/abs/2510.14882)]

[arxiv 2025.10] Chimera: Compositional Image Generation using Part-based Concepting  [[PDF](https://arxiv.org/abs/2510.18083),[Page](https://chimera-compositional-image-generation.vercel.app/)] ![Code](https://img.shields.io/github/stars/shivamsingh-gpu/Chimera?style=social&label=Star)

[arxiv 2025.10] LayerComposer: Interactive Personalized T2I via Spatially-Aware Layered Canvas  [[PDF](https://arxiv.org/abs/2510.20820),[Page](https://snap-research.github.io/layercomposer/)] 

[arxiv 2025.10] Sketch-to-Layout: Sketch-Guided Multimodal Layout Generation  [[PDF](https://arxiv.org/abs/2510.27632),[Page](https://github.com/google-deepmind/sketch_to_layout)] ![Code](https://img.shields.io/github/stars/google-deepmind/sketch_to_layout?style=social&label=Star)

[arxiv 2025.11]  BideDPO: Conditional Image Generation with Simultaneous Text and Condition Alignment [[PDF](https://arxiv.org/abs/2511.19268),[Page](https://limuloo.github.io/BideDPO/)] ![Code](https://img.shields.io/github/stars/limuloo/BideDPO?style=social&label=Star)

[arxiv 2025.11] Synthetic Curriculum Reinforces Compositional Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2511.18378)]

[arxiv 2025.11] ConsistCompose: Unified Multimodal Layout Control for Image Composition  [[PDF](https://arxiv.org/pdf/2511.18333)]

[arxiv 2025.11] Canvas-to-Image: Compositional Image Generation with Multimodal Controls  [[PDF](https://arxiv.org/abs/2511.21691),[Page](https://snap-research.github.io/canvas-to-image/)] 

[arxiv 2026.02]  SketchingReality: From Freehand Scene Sketches To Photorealistic Images [[PDF](https://arxiv.org/abs/2602.14648),[Page](https://ahmedbourouis.github.io/SketchingReality_ICLR26/)] 

[arxiv 2026.03] HiFi-Inpaint: Towards High-Fidelity Reference-Based Inpainting for Generating Detail-Preserving Human-Product Images  [[PDF](https://arxiv.org/pdf/2603.02210),[Page](https://correr-zhou.github.io/HiFi-Inpaint/)] ![Code](https://img.shields.io/github/stars/Correr-Zhou/HiFi-Inpaint?style=social&label=Star)

[arxiv 2026.03] CyCLeGen: Cycle-Consistent Layout Prediction and Image Generation in Vision Foundation Models  [[PDF](https://arxiv.org/abs/2603.14957)]

[arxiv 2026.03] NumColor: Precise Numeric Color Control in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2603.13547)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
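Most of the layout and composition entries above condition the denoiser on an explicit spatial signal (boxes, masks, edges, depth). As a common reference point, here is a minimal sketch of spatially conditioned generation with a ControlNet in Hugging Face `diffusers`; it is a generic baseline for illustration, not any listed paper's method, and the file names are assumptions (the model ids are public checkpoints).

```python
# Minimal sketch of layout-conditioned generation with a ControlNet in diffusers.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The spatial condition (here an edge map drawn from the desired layout)
# is passed alongside the prompt and steers where objects appear.
layout_edges = load_image("layout_edges.png")  # illustrative file name
image = pipe(
    "a cozy living room with a sofa and two lamps",
    image=layout_edges,
    num_inference_steps=30,
).images[0]
image.save("layout_conditioned.png")
```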
# end of composition

## Image Variation 
[arxiv 2023.08]IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2308.06721.pdf), [Page](https://ip-adapter.github.io/)]



## Super-Resolution & restoration & Higher-resolution generation
[arxiv 2022.12]ADIR: Adaptive Diffusion for Image Reconstruction [[PDF](https://shadyabh.github.io/ADIR/ADIR_files/ADIR.pdf)]

[arxiv 2023.03]Denoising Diffusion Probabilistic Models for Robust Image Super-Resolution in the Wild [[PDF](https://arxiv.org/abs/2302.07864)]

[arxiv 2023.03]TextIR: A Simple Framework for Text-based Editable Image Restoration [[PDF](https://arxiv.org/abs/2302.14736)]

[arxiv 2023.03]Unlimited-Size Diffusion Restoration [[PDF](https://arxiv.org/abs/2303.00354), [code](https://github.com/wyhuai/DDNM/tree/main/hq_demo)]

[arxiv 2023.03]DiffIR: Efficient Diffusion Model for Image Restoration [[PDF](https://arxiv.org/abs/2303.09472)]

[arxiv 2023.03]Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration [[PDF](https://arxiv.org/abs/2303.11435)]

[arxiv 2023.03]Implicit Diffusion Models for Continuous Super-Resolution [[PDF](https://arxiv.org/abs/2303.16491)]

[arxiv 2023.05]UDPM: Upsampling Diffusion Probabilistic Models [[PDF](https://arxiv.org/abs/2305.16269)]

[arxiv 2023.06]Image Harmonization with Diffusion Model [[PDF](https://arxiv.org/abs/2306.10441)]

[arxiv 2023.06]PartDiff: Image Super-resolution with Partial Diffusion Models [[PDF](https://arxiv.org/abs/2307.11926)]

[arxiv 2023.08]Patched Denoising Diffusion Models For High-Resolution Image Synthesis [[PDF](https://arxiv.org/abs/2308.01316)]

[arxiv 2023.08]DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [[PDF](https://arxiv.org/abs/2308.15070),[Page](https://github.com/XPixelGroup/DiffBIR)]

[arxiv 2023.10] ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2310.07702), [Page](https://yingqinghe.github.io/scalecrafter/)]

[arxiv 2023.11]Image Super-Resolution with Text Prompt Diffusion [[PDF](https://arxiv.org/abs/2311.14282),[Page](https://github.com/zhengchen1999/PromptSR)]

[arxiv 2023.11]SinSR: Diffusion-Based Image Super-Resolution in a Single Step [[PDF](https://arxiv.org/abs/2311.14760)]

[arxiv 2023.11]SeeSR: Towards Semantics-Aware Real-World Image Super-Resolution [[PDF](https://arxiv.org/abs/2311.16518)]

[arxiv 2023.11]LFSRDiff: Light Field Image Super-Resolution via Diffusion Models [[PDF](https://arxiv.org/abs/2311.16517)]

[arxiv 2023.12]ElasticDiffusion: Training-free Arbitrary Size Image Generation [[PDF](https://arxiv.org/abs/2311.18822),[Code](https://github.com/MoayedHajiAli/ElasticDiffusion-official.git)]
[arxiv 2023.12]UIEDP:Underwater Image Enhancement with Diffusion Prior [[PDF](https://arxiv.org/abs/2312.06240)]

[arxiv 2023.12]MagicScroll: Nontypical Aspect-Ratio Image Generation for Visual Storytelling via Multi-Layered Semantic-Aware Denoising [[PDF](https://arxiv.org/abs/2312.10899),[Page](https://magicscroll.github.io/)]

[arxiv 2024.01]Diffusion Models, Image Super-Resolution And Everything: A Survey

[arxiv 2024.01]Improving the Stability of Diffusion Models for Content Consistent Super-Resolution [[PDF](https://arxiv.org/abs/2401.00877)]

[arxiv 2024.01]Scaling Up to Excellence: Practicing Model Scaling for Photo-Realistic Image Restoration In the Wild [[PDF](https://arxiv.org/abs/2401.13627), [Page](https://supir.xpixel.group/)]

[arxiv 2024.01]Spatial-and-Frequency-aware Restoration method for Images based on Diffusion Models [[PDF](https://arxiv.org/abs/2401.17629)]

[arxiv 2024.02]You Only Need One Step: Fast Super-Resolution with Stable Diffusion via Scale Distillation [[PDF](https://arxiv.org/abs/2401.17258)]

[arxiv 2024.02]Make a Cheap Scaling: A Self-Cascade Diffusion Model for Higher-Resolution Adaptation[[PDF](https://arxiv.org/abs/2402.10491),[Page](https://guolanqing.github.io/Self-Cascade/)]

[arxiv 2024.02]SAM-DiffSR: Structure-Modulated Diffusion Model for Image Super-Resolution [[PDF](https://arxiv.org/abs/2402.17133)]

[arxiv 2024.03]ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models [[PDF](https://arxiv.org/abs/2403.02084), [Page](https://res-adapter.github.io/)]

[arxiv 2024.03]XPSR: Cross-modal Priors for Diffusion-based Image Super-Resolution [[PDF](https://arxiv.org/abs/2403.05049)]

[arxiv 2024.03]BlindDiff: Empowering Degradation Modelling in Diffusion Models for Blind Image Super-Resolution [[PDF](https://arxiv.org/abs/2403.10211)]

[arxiv 2024.04]Upsample Guidance: Scale Up Diffusion Models without Training [[PDF](https://arxiv.org/abs/2404.01709)]

[arxiv 2024.04]DeeDSR: Towards Real-World Image Super-Resolution via Degradation-Aware Stable Diffusion [[PDF](https://arxiv.org/abs/2404.00661)]

[arxiv 2024.04]BeyondScene: Higher-Resolution Human-Centric Scene Generation With Pretrained Diffusion [[PDF](https://arxiv.org/abs/2404.04544), [Page](https://janeyeon.github.io/beyond-scene)]

[arxiv 2024.05]CDFormer:When Degradation Prediction Embraces Diffusion Model for Blind Image Super-Resolution [[PDF](https://arxiv.org/abs/2405.07648)]

[arxiv 2024.05]Frequency-Domain Refinement with Multiscale Diffusion for Super Resolution [[PDF](https://arxiv.org/abs/2405.10014)]

[arxiv 2024.05] PatchScaler: An Efficient Patch-independent Diffusion Model for Super-Resolution  [[PDF](https://arxiv.org/abs/2405.17158), [Page](https://github.com/yongliuy/PatchScaler)]

[arxiv 2024.05]Blind Image Restoration via Fast Diffusion Inversion[[PDF](https://arxiv.org/abs/2405.19572),[Page](https://github.com/hamadichihaoui/BIRD)]

[arxiv 2024.06]  FlowIE: Efficient Image Enhancement via Rectified Flow [[PDF](https://arxiv.org/abs/2406.00508),[Page](https://github.com/EternalEvan/FlowIE)]

[arxiv 2024.06]  Hierarchical Patch Diffusion Models for High-Resolution Video Generation [[PDF](https://arxiv.org/abs/2406.07792)]
[arxiv 2024.06] Is One GPU Enough? Pushing Image Generation at Higher-Resolutions with Foundation Models [[PDF](https://arxiv.org/abs/2406.07251),[Page](https://github.com/Thanos-DB/Pixelsmith)]

[arxiv 2024.06]  Towards Realistic Data Generation for Real-World Super-Resolution[[PDF](https://arxiv.org/abs/2406.07255)]

[arxiv 2024.06] LFMamba: Light Field Image Super-Resolution with State Space Model [[PDF](https://arxiv.org/abs/2406.12463)]

[arxiv 2024.06] ResMaster: Mastering High-Resolution Image Generation via Structural and Fine-Grained Guidance [[PDF](https://arxiv.org/abs/2406.16476),[Page](https://shuweis.github.io/ResMaster/)]

[arxiv 2024.07] Layered Diffusion Model for One-Shot High Resolution Text-to-Image Synthesis[[PDF](https://arxiv.org/abs/2407.06079)]

[arxiv 2024.07] LightenDiffusion: Unsupervised Low-Light Image Enhancement with Latent-Retinex Diffusion Models  [[PDF](https://arxiv.org/abs/2407.08939),[Page](https://github.com/JianghaiSCU/LightenDiffusion)]

[arxiv 2024.07] AccDiffusion: An Accurate Method for Higher-Resolution Image Generation[[PDF](https://arxiv.org/abs/2407.10738),[Page](https://github.com/lzhxmu/AccDiffusion)]

[arxiv 2024.07]  ∞-Brush: Controllable Large Image Synthesis with Diffusion Models in Infinite Dimensions [[PDF](https://arxiv.org/abs/2407.14709)]

[arxiv 2024.08] MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning  [[PDF](https://arxiv.org/abs/2408.11001),[Page](https://haoningwu3639.github.io/MegaFusion/)]

[arxiv 2024.09] HiPrompt: Tuning-free Higher-Resolution Generation with Hierarchical MLLM Prompts [[PDF](https://arxiv.org/abs/2409.02919),[Page](https://liuxinyv.github.io/HiPrompt/)]

[arxiv 2024.09] FreeEnhance: Tuning-Free Image Enhancement via Content-Consistent Noising-and-Denoising Process  [[PDF](https://arxiv.org/abs/2409.07451),[Page]()]

[arxiv 2024.09] Taming Diffusion Prior for Image Super-Resolution with Domain Shift SDEs  [[PDF](https://arxiv.org/abs/2409.17778),[Page](https://github.com/QinpengCui/DoSSR)]

[arxiv 2024.09] Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors  [[PDF](https://arxiv.org/abs/2409.17058),[Page](https://github.com/ArcticHare105/S3Diff)]

[arxiv 2024.09] BurstM: Deep Burst Multi-scale SR using Fourier Space with Optical Flow  [[PDF](https://arxiv.org/abs/2409.15384)]

[arxiv 2024.10]  Distillation-Free One-Step Diffusion for Real-World Image Super-Resolution [[PDF](https://arxiv.org/abs/2410.04224),[Page](https://github.com/JianzeLi-114/DFOSD)]

[arxiv 2024.10] AP-LDM: Attentive and Progressive Latent Diffusion Model for Training-Free High-Resolution Image Generation  [[PDF](https://arxiv.org/abs/2410.06055),[Page](https://github.com/kmittle/AP-LDM)]

[arxiv 2024.10] Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models  [[PDF](https://arxiv.org/abs/2410.10733),[Page](https://github.com/mit-han-lab/efficientvit)]

[arxiv 2024.10] Hi-Mamba: Hierarchical Mamba for Efficient Image Super-Resolution  [[PDF](),[Page]()]

[arxiv 2024.10] ConsisSR: Delving Deep into Consistency in Diffusion-based Image Super-Resolution  [[PDF](https://arxiv.org/abs/2410.13807)]

[arxiv 2024.10] ClearSR: Latent Low-Resolution Image Embeddings Help Diffusion-Based Real-World Super Resolution Models See Clearer  [[PDF](https://arxiv.org/abs/2410.14279)]

[arxiv 2024.10]  Multi-Scale Diffusion: Enhancing Spatial Layout in High-Resolution Panoramic Image Generation [[PDF](https://arxiv.org/abs/2410.18830)]
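Many of the diffusion-prior SR entries in this section share one usage pattern: a low-resolution image plus a content-describing prompt drives a latent upscaler. A minimal sketch with the public `stabilityai/stable-diffusion-x4-upscaler` checkpoint in `diffusers` follows; it is a generic baseline, not the pipeline of any specific entry, and the file names are assumptions.

```python
# Minimal sketch of diffusion-prior 4x super-resolution in diffusers.
import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = load_image("input_128.png")  # illustrative file name; small RGB input
# The prompt describes scene content, echoing the prompt-guided SR entries above.
result = pipe(
    prompt="a detailed photo of a tabby cat",
    image=low_res,
    num_inference_steps=25,
).images[0]
result.save("output_512.png")
```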
[arxiv 2024.10] LoRA-IR: Taming Low-Rank Experts for Efficient All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2410.15385)]

[arxiv 2024.10] FlowDCN: Exploring DCN-like Architectures for Fast Image Generation with Arbitrary Resolution  [[PDF](https://arxiv.org/abs/2410.22655)]

[arxiv 2024.11] InstantIR: Blind Image Restoration with Instant Generative Reference  [[PDF](https://arxiv.org/abs/2410.06551),[Page](https://jy-joy.github.io/InstantIR/)]

[arxiv 2024.11] MureObjectStitch: Multi-reference Image Composition  [[PDF](https://arxiv.org/abs/2411.07462),[Page](https://github.com/bcmi/MureObjectStitch-Image-Composition)]

[arxiv 2024.11] DR-BFR: Degradation Representation with Diffusion Models for Blind Face Restoration  [[PDF](https://arxiv.org/abs/2411.10508)]

[arxiv 2024.11] S3 Mamba: Arbitrary-Scale Super-Resolution via Scaleable State Space Model  [[PDF](https://arxiv.org/abs/2411.11906),[Page](https://github.com/xiapeizhe12138/S3Mamba-ArbSR)]

[arxiv 2024.11]  HF-Diff: High-Frequency Perceptual Loss and Distribution Matching for One-Step Diffusion-Based Image Super-Resolution [[PDF](https://arxiv.org/abs/2411.13548),[Page](https://github.com/shoaib-sami/HF-Diff)]

[arxiv 2024.11]  GenDeg: Diffusion-Based Degradation Synthesis for Generalizable All-in-One Image Restoration [[PDF](https://arxiv.org/abs/2411.17687),[Page](https://sudraj2002.github.io/gendegpage/)]

[arxiv 2024.11]  PassionSR: Post-Training Quantization with Adaptive Scale in One-Step Diffusion based Image Super-Resolution [[PDF](https://arxiv.org/abs/2411.17106),[Page](https://github.com/libozhu03/PassionSR)] ![Code](https://img.shields.io/github/stars/libozhu03/PassionSR?style=social&label=Star)

[arxiv 2024.12] HoliSDiP: Image Super-Resolution via Holistic Semantics and Diffusion Prior  [[PDF](https://arxiv.org/abs/2411.18662),[Page](https://liyuantsao.github.io/HoliSDiP/)] ![Code](https://img.shields.io/github/stars/liyuantsao/HoliSDiP?style=social&label=Star)

[arxiv 2024.12] FAM Diffusion: Frequency and Attention Modulation for High-Resolution Image Generation with Stable Diffusion  [[PDF](https://arxiv.org/abs/2411.18552)] 

[arxiv 2024.12]  TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution [[PDF](https://arxiv.org/abs/2411.18263)] 

[arxiv 2024.12] AccDiffusion v2: Towards More Accurate Higher-Resolution Diffusion Extrapolation  [[PDF](https://arxiv.org/abs/2412.02099),[Page](https://github.com/lzhxmu/AccDiffusion_v2)] ![Code](https://img.shields.io/github/stars/lzhxmu/AccDiffusion_v2?style=social&label=Star)

[arxiv 2024.12]  HIIF: Hierarchical Encoding based Implicit Image Function for Continuous Super-resolution [[PDF](https://arxiv.org/pdf/2412.03748)]

[arxiv 2024.12] TASR: Timestep-Aware Diffusion Model for Image Super-Resolution  [[PDF](https://arxiv.org/abs/2412.03355),[Page](https://github.com/SleepyLin/TASR)] ![Code](https://img.shields.io/github/stars/SleepyLin/TASR?style=social&label=Star)

[arxiv 2024.12] Pixel-level and Semantic-level Adjustable Super-resolution: A Dual-LoRA Approach  [[PDF](https://arxiv.org/pdf/2412.03017),[Page](https://github.com/csslc/PiSA-SR)] ![Code](https://img.shields.io/github/stars/csslc/PiSA-SR?style=social&label=Star)

[arxiv 2024.12] Semantic Segmentation Prior for Diffusion-Based Real-World Super-Resolution  [[PDF](https://arxiv.org/pdf/2412.02960)]

[arxiv 2024.12]  ReF-LDM: A Latent Diffusion Model for Reference-based 
Face Image Restoration [[PDF](https://arxiv.org/abs/2412.05043),[Page](https://chiweihsiao.github.io/refldm.github.io/)] ![Code](https://img.shields.io/github/stars/ChiWeiHsiao/ref-ldm?style=social&label=Star)\n\n[arxiv 2024.12]  Hero-SR: One-Step Diffusion for Super-Resolution with Human Perception Priors [[PDF](https://arxiv.org/abs/2412.07152),[Page](https://github.com/W-JG/Hero-SR)] ![Code](https://img.shields.io/github/stars/W-JG/Hero-SR?style=social&label=Star)\n\n[arxiv 2024.12] RAP-SR: RestorAtion Prior Enhancement in Diffusion Models for Realistic Image Super-Resolution  [[PDF](https://arxiv.org/abs/2412.07149),[Page](https://github.com/W-JG/RAP-SR)] ![Code](https://img.shields.io/github/stars/W-JG/RAP-SR?style=social&label=Star)\n\n[arxiv 2024.12]  FreeScale: Unleashing the Resolution of Diffusion Models via Tuning-Free Scale Fusion [[PDF](https://arxiv.org/abs/2412.09626),[Page](http://haonanqiu.com/projects/FreeScale.html)] \n\n\n[arxiv 2024.12]  Arbitrary-steps Image Super-resolution via Diffusion Inversion [[PDF](),[Page](https://github.com/zsyOAOA/InvSR)] ![Code](https://img.shields.io/github/stars/zsyOAOA/InvSR?style=social&label=Star)\n\n[arxiv 2024.12]  Tiled Diffusion [[PDF](https://arxiv.org/abs/2412.15185),[Page](https://madaror.github.io/tiled-diffusion.github.io/)] ![Code](https://img.shields.io/github/stars/madaror/tiled-diffusion?style=social&label=Star)\n\n[arxiv 2025.01] Varformer: Adapting VAR’s Generative Prior for Image Restoration  [[PDF](https://github.com/siywang541/Varformer),[Page](https://github.com/siywang541/Varformer)] ![Code](https://img.shields.io/github/stars/siywang541/Varformer?style=social&label=Star)\n\n[arxiv 2025.02] One Diffusion Step to Real-World Super-Resolution via Flow Trajectory Distillation  [[PDF](https://arxiv.org/abs/2502.01993),[Page](https://github.com/JianzeLi-114/FluxSR)] ![Code](https://img.shields.io/github/stars/JianzeLi-114/FluxSR?style=social&label=Star)\n\n[arxiv 2025.02] DeblurDiff: Real-World Image Deblurring with Generative Diffusion Models  [[PDF](https://arxiv.org/abs/2502.03810),[Page](https://github.com/kkkls/DeblurDiff)] ![Code](https://img.shields.io/github/stars/kkkls/DeblurDiff?style=social&label=Star)\n\n[arxiv 2025.02]  CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution [[PDF](https://arxiv.org/pdf/2502.15478),[Page](https://github.com/Kai-Liu001/CondiQuant)] ![Code](https://img.shields.io/github/stars/Kai-Liu001/CondiQuant?style=social&label=Star)\n\n[arxiv 2025.03] Diffusion Restoration Adapter for Real-World Image Restoration  [[PDF](https://arxiv.org/abs/2502.20679)]\n\n[arxiv 2025.03] DifIISR: A Diffusion Model with Gradient Guidance for Infrared Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.01187),[Page](https://github.com/zirui0625/DifIISR)] ![Code](https://img.shields.io/github/stars/zirui0625/DifIISR?style=social&label=Star)\n\n[arxiv 2025.03] QArtSR: Quantization via Reverse-Module and Timestep-Retraining in One-Step Diffusion based Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.05584),[Page](https://github.com/libozhu03/QArtSR)] ![Code](https://img.shields.io/github/stars/libozhu03/QArtSR?style=social&label=Star)\n\n[arxiv 2025.03] CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.06896),[Page](https://github.com/EquationWalker/CATANet)] ![Code](https://img.shields.io/github/stars/EquationWalker/CATANet?style=social&label=Star)\n\n[arxiv 2025.03]  One-Step Diffusion 
Model for Image Motion-Deblurring [[PDF](https://arxiv.org/abs/2503.06537),[Page](https://github.com/xyLiu339/OSDD)] ![Code](https://img.shields.io/github/stars/xyLiu339/OSDD?style=social&label=Star)

[arxiv 2025.03] One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation [[PDF](https://arxiv.org/abs/2503.13358)]

[arxiv 2025.03] The Power of Context: How Multimodality Improves Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.14503)]

[arxiv 2025.03]  CTSR: Controllable Fidelity-Realness Trade-off Distillation for Real-World Image Super Resolution [[PDF](https://arxiv.org/pdf/2503.14272),[Page]()]

[arxiv 2025.03] Dereflection Any Image with Diffusion Priors and Diversified Data  [[PDF](https://abuuu122.github.io/DAI.github.io/),[Page](https://abuuu122.github.io/DAI.github.io/)] ![Code](https://img.shields.io/github/stars/Abuuu122/Dereflection-Any-Image?style=social&label=Star)

[arxiv 2025.03] Consistency Trajectory Matching for One-Step Generative Super-Resolution  [[PDF](https://arxiv.org/abs/2503.20349)]

[arxiv 2025.03] Progressive Focused Transformer for Single Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.20337)]

[arxiv 2025.03] Diffusion Image Prior  [[PDF](https://arxiv.org/abs/2503.21410)]

[arxiv 2025.04] DiT4SR: Taming Diffusion Transformer for Real-World Image Super-Resolution  [[PDF](https://arxiv.org/abs/2503.23580)]

[arxiv 2025.04]  HiFlow: Training-free High-Resolution Image Generation with Flow-Aligned Guidance [[PDF](https://arxiv.org/abs/2504.06232),[Page](https://github.com/Bujiazi/HiFlow)] ![Code](https://img.shields.io/github/stars/Bujiazi/HiFlow?style=social&label=Star)

[arxiv 2025.04]  ZipIR: Latent Pyramid Diffusion Transformer for High-Resolution Image Restoration [[PDF](https://arxiv.org/pdf/2504.08591)]

[arxiv 2025.04]  Crafting Query-Aware Selective Attention for Single Image Super-Resolution [[PDF](https://arxiv.org/abs/2504.06634)]

[arxiv 2025.04]  Enhanced Semantic Extraction and Guidance for UGC Image Super Resolution [[PDF](https://arxiv.org/abs/2504.09887),[Page](https://github.com/Moonsofang/NTIRE-2025-SRlab)] ![Code](https://img.shields.io/github/stars/Moonsofang/NTIRE-2025-SRlab?style=social&label=Star)

[arxiv 2025.04]  InstaRevive: One-Step Image Enhancement via Dynamic Score Matching [[PDF](https://arxiv.org/abs/2504.15513),[Page](https://github.com/EternalEvan/InstaRevive)] ![Code](https://img.shields.io/github/stars/EternalEvan/InstaRevive?style=social&label=Star)

[arxiv 2025.04] DSPO: Direct Semantic Preference Optimization for Real-World Image Super-Resolution  [[PDF](https://arxiv.org/abs/2504.15176)]

[arxiv 2025.04] Acquire and then Adapt: Squeezing out Text-to-Image Model for Image Restoration  [[PDF](https://arxiv.org/abs/2504.15159)]

[arxiv 2025.04]  Dual Prompting Image Restoration with Diffusion Transformers [[PDF](https://arxiv.org/pdf/2504.17825)]

[arxiv 2025.05] GuideSR: Rethinking Guidance for One-Step High-Fidelity Diffusion-Based Super-Resolution  [[PDF](https://arxiv.org/abs/2505.00687)]

[arxiv 2025.05]  EAM: Enhancing Anything with Diffusion Transformers for Blind Super-Resolution [[PDF](https://arxiv.org/pdf/2505.05209)]

[arxiv 2025.05] RestoreVAR: Visual Autoregressive Generation for All-in-One Image Restoration  [[PDF](https://arxiv.org/abs/2505.18047),[Page](https://sudraj2002.github.io/restorevarpage/)] 
![Code](https://img.shields.io/github/stars/sudraj2002/RestoreVAR?style=social&label=Star)\n\n[arxiv 2025.06]  Controlled Data Rebalancing in Multi-Task Learning for Real-World Image Super-Resolution [[PDF](https://arxiv.org/pdf/2506.05607)]\n\n[arxiv 2025.06]  Text-Aware Image Restoration with Diffusion Models [[PDF](https://arxiv.org/abs/2506.09993),[Page](https://cvlab-kaist.github.io/TAIR/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/TAIR?style=social&label=Star)\n\n[arxiv 2025.06] Reversing Flow for Image Restoration  [[PDF](https://arxiv.org/abs/2506.16961)]\n\n[arxiv 2025.06] Visual-Instructed Degradation Diffusion for All-in-One Image Restoration  [[PDF](https://arxiv.org/pdf/2506.16960)]\n\n[arxiv 2025.06]  RealSR-R1: Reinforcement Learning for Real-World Image Super-Resolution with Vision-Language Chain-of-Thought [[PDF](https://arxiv.org/pdf/2506.16796),[Page](https://github.com/Junboooo/RealSR-R1)] ![Code](https://img.shields.io/github/stars/Junboooo/RealSR-R1?style=social&label=Star)\n\n[arxiv 2025.06] Diffusion Transformer-to-Mamba Distillation for High-Resolution Image Generation  [[PDF](https://arxiv.org/abs/2506.18999)]\n\n[arxiv 2025.07]  4KAgent: Agentic Any Image to 4K Super-Resolution [[PDF](https://arxiv.org/abs/2507.07105),[Page](https://4kagent.github.io/)] \n\n[arxiv 2025.07] Hallucination Score: Towards Mitigating Hallucinations in Generative Image Super-Resolution  [[PDF](https://arxiv.org/pdf/2507.14367)]\n\n[arxiv 2025.07]  Fine-structure Preserved Real-world Image Super-resolution via Transfer VAE Training [[PDF](https://arxiv.org/abs/2507.20291),[Page](https://github.com/Joyies/TVT)] ![Code](https://img.shields.io/github/stars/Joyies/TVT?style=social&label=Star)\n\n[arxiv 2025.07] APT: Improving Diffusion Models for High Resolution Image Generation with Adaptive Path Tracing  [[PDF](https://arxiv.org/abs/2507.21690)]\n\n[arxiv 2025.08]  OMGSR: You Only Need One Mid-timestep Guidance for Real-World Image Super-Resolution [[PDF](https://arxiv.org/pdf/2508.08227),[Page](https://github.com/wuer5/OMGSR)] ![Code](https://img.shields.io/github/stars/wuer5/OMGSR?style=social&label=Star)\n\n[arxiv 2025.09] InfGen: A Resolution-Agnostic Paradigm for Scalable Image Synthesis  [[PDF](https://arxiv.org/abs/2509.10441)]\n\n[arxiv 2025.09]  Realism Control One-step Diffusion for Real-World Image Super-Resolution [[PDF](https://arxiv.org/abs/2509.10122)]\n\n[arxiv 2025.09] Degradation-Aware All-in-One Image Restoration via Latent Prior Encoding  [[PDF](https://arxiv.org/abs/2509.17792),[Page](https://github.com/sharif-apu/DAIR)] ![Code](https://img.shields.io/github/stars/sharif-apu/DAIR?style=social&label=Star)\n\n[arxiv 2025.10] PocketSR: The Super-Resolution Expert in Your Pocket Mobiles  [[PDF](https://arxiv.org/abs/2510.03012)]\n\n[arxiv 2025.10]  Ultra High-Resolution Image Inpainting with Patch-Based Content Consistency Adapter [[PDF](https://arxiv.org/abs/2510.13419),[Page](https://github.com/Roveer/Patch-Based-Adapter)] ![Code](https://img.shields.io/github/stars/Roveer/Patch-Based-Adapter?style=social&label=Star)\n\n[arxiv 2025.10] Scale-DiT: Ultra-High-Resolution Image Generation with Hierarchical Local Attention  [[PDF](https://arxiv.org/abs/2510.16325)]\n\n[arxiv 2025.10] DP2O-SR: Direct Perceptual Preference Optimization for Real-World Image Super-Resolution  [[PDF](https://arxiv.org/abs/2510.18851),[Page](https://github.com/cswry/DP2O-SR)] ![Code](https://img.shields.io/github/stars/cswry/DP2O-SR?style=social&label=Star)\n\n[arxiv 2025.10] DyPE: 
Dynamic Position Extrapolation for Ultra High Resolution Diffusion  [[PDF](https://arxiv.org/abs/2510.20766),[Page](https://noamissachar.github.io/DyPE/)] ![Code](https://img.shields.io/github/stars/guyyariv/DyPE?style=social&label=Star)

[arxiv 2025.11]  One Small Step in Latent, One Giant Leap for Pixels: Fast Latent Upscale Adapter for Your Diffusion Models [[PDF](https://arxiv.org/abs/2511.10629)]

[arxiv 2025.11] One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution  [[PDF](https://arxiv.org/abs/2511.17138)]

[arxiv 2026.01]  From Physical Degradation Models to Task-Aware All-in-One Image Restoration [[PDF](https://arxiv.org/abs/2601.10192)]

[arxiv 2026.02] PixelRush: Ultra-Fast, Training-Free High-Resolution Image Generation via One-step Diffusion  [[PDF](https://arxiv.org/abs/2602.12769)]

[arxiv 2026.03] Joint Geometric and Trajectory Consistency Learning for One-Step Real-World Super-Resolution  [[PDF](https://arxiv.org/abs/2602.24240),[Page](https://github.com/Blazedengcy/GTASR)] ![Code](https://img.shields.io/github/stars/Blazedengcy/GTASR?style=social&label=Star)

[arxiv 2026.03]  AlignVAR: Towards Globally Consistent Visual Autoregression for Image Super-Resolution [[PDF](https://arxiv.org/pdf/2603.00589)]

[arxiv 2026.03]  FiDeSR: High-Fidelity and Detail-Preserving One-Step Diffusion Super-Resolution [[PDF](https://arxiv.org/abs/2603.02692),[Page](https://github.com/Ar0Kim/FiDeSR)] ![Code](https://img.shields.io/github/stars/Ar0Kim/FiDeSR?style=social&label=Star)

[arxiv 2026.03] M2IR: Proactive All-in-One Image Restoration via Mamba-style Modulation and Mixture-of-Experts  [[PDF](https://arxiv.org/abs/2603.14816),[Page](https://github.com/Im34v/M2IR)] ![Code](https://img.shields.io/github/stars/Im34v/M2IR?style=social&label=Star)

[arxiv 2026.03] Revisiting the Perception-Distortion Trade-off with Spatial-Semantic Guided Super-Resolution  [[PDF](https://arxiv.org/abs/2603.14112),[Page](https://hssmac.github.io/SpaSemSR_web/)]

[arxiv 2026.03] GDPO-SR: Group Direct Preference Optimization for One-Step Generative Image Super-Resolution  [[PDF](https://arxiv.org/abs/2603.16769)]

[arxiv 2026.03] UCAN: Unified Convolutional Attention Network for Expansive Receptive Fields in Lightweight Super-Resolution  [[PDF](https://arxiv.org/abs/2603.11680)]

[arxiv 2026.03] MeInTime: Bridging Age Gap in Identity-Preserving Face Restoration  [[PDF](https://arxiv.org/abs/2603.18645)] ![Code](https://img.shields.io/github/stars/teer4/MeInTime?style=social&label=Star)

[arxiv 2026.03] RealRestorer: Towards Generalizable Real-World Image Restoration with Large-Scale Image Editing Models  [[PDF](https://arxiv.org/abs/2603.25502),[Page](https://yfyang007.github.io/RealRestorer/)]

[arxiv 2026.03] Restore, Assess, Repeat: A Unified Framework for Iterative Image Restoration  [[PDF](https://arxiv.org/abs/2603.26385),[Page](https://restore-assess-repeat.github.io)]

[arxiv 2026.03] Beyond Ground-Truth: Leveraging Image Quality Priors for Real-World Image Restoration  [[PDF](https://arxiv.org/abs/2603.29773)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## translation 
[arxiv 2024.10] CtrLoRA: An Extensible and Efficient Framework for Controllable Image Generation  [[PDF](https://arxiv.org/abs/2410.09400),[Page](https://github.com/xyfJASON/ctrlora)]
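For the translation entries in this section, the simplest baseline is SDEdit-style img2img: noise the source image partway along the diffusion schedule, then denoise under the target prompt. A minimal `diffusers` sketch follows (an illustrative baseline, not the method of any specific entry; model id is a public checkpoint, file names are assumptions).

```python
# Minimal SDEdit-style image-to-image translation sketch in diffusers.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

source = load_image("photo.png")  # illustrative file name
# strength sets how far the source is noised: low values keep its structure,
# high values allow a freer re-synthesis in the target domain.
out = pipe(
    prompt="the same scene as a watercolor painting",
    image=source,
    strength=0.6,
).images[0]
out.save("translated.png")
```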
[arxiv 2024.11] Large-Scale Text-to-Image Model with Inpainting is a Zero-Shot Subject-Driven Image Generator  [[PDF](https://arxiv.org/abs/2411.15466),[Page](https://diptychprompting.github.io/)] 

[arxiv 2025.02] RelaCtrl: Relevance-Guided Efficient Control for Diffusion Transformers  [[PDF](https://arxiv.org/abs/2502.14377),[Page](https://relactrl.github.io/RelaCtrl/)] 


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)



## action transfer 
[arxiv 2023.11]Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation [[PDF](https://arxiv.org/abs/2311.15841)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

## Style transfer 
[arxiv 22.11; kuaishou] ***DiffStyler***: Controllable Dual Diffusion for Text-Driven Image Stylization \[[PDF](https://arxiv.org/pdf/2211.10682.pdf), code\]  

[ICLR 23] TEXT-GUIDED DIFFUSION IMAGE STYLE TRANSFER WITH CONTRASTIVE LOSS [[Paper]](https://openreview.net/pdf?id=iJ_E0ZCy8fi)  

[arxiv 22.11; kuaishou&CAS] Inversion-Based Creativity Transfer with Diffusion Models \[[PDF](https://arxiv.org/pdf/2211.13203.pdf), [Code](https://github.com/zyxElsa/creativity-transfer)\]

[arxiv 2022.12]Diff-Font: Diffusion Model for Robust One-Shot Font Generation [[PDF](https://arxiv.org/pdf/2212.05895.pdf)]

[arxiv 2023.02]Structure and Content-Guided Video Synthesis with Diffusion Models [[PDF](https://arxiv.org/abs/2302.03011), [Page](https://research.runwayml.com/gen1)]

[arxiv 2023.03]Design Booster: A Text-Guided Diffusion Model for Image Translation with Spatial Layout Preservation [[PDF](https://arxiv.org/abs/2302.02284)]

[arxiv 2023.02]DiffFashion: Reference-based Fashion Design with Structure-aware Transfer by Diffusion Models [[PDF](https://arxiv.org/abs/2302.06826)]

[arxiv 2022.11]Plug-and-Play Diffusion Features for Text-Driven Image-to-Image Translation[[PDF](https://arxiv.org/abs/2211.12572)]

[arxiv 2023.03]StyO: Stylize Your Face in Only One-Shot [[PDF](https://arxiv.org/pdf/2303.03231.pdf)]

[arxiv 2023.03]Zero-Shot Contrastive Loss for Text-Guided Diffusion Image Style Transfer [[PDF](https://arxiv.org/abs/2303.08622)]

[arxiv 2023.04] One-Shot Stylization for Full-Body Human Images [[PDF]()]

[arxiv 2023.06]StyleDrop: Text-to-Image Generation in Any Style [[PDF](https://arxiv.org/abs/2306.00983), [Page](https://styledrop.github.io/)]

[arxiv 2023.07]General Image-to-Image Translation with One-Shot Image Guidance [[PDF](https://arxiv.org/pdf/2307.14352.pdf)]

[arxiv 2023.08]DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models [[PDF](https://arxiv.org/abs/2308.01655)]

[arxiv 2023.08] StyleDiffusion: Controllable Disentangled Style Transfer via Diffusion Models [[PDF](https://arxiv.org/abs/2308.07863)]

[arxiv 2023.08]Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation [[PDF](https://arxiv.org/abs/2308.12968), [Page](https://yuxinn-j.github.io/projects/Scenimefy.html)]

[arxiv 2023.09]StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation [[PDF](https://arxiv.org/abs/2309.01770)]

[arxiv 2023.09]DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2309.06933.pdf)]

[arxiv 2023.11]ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors[[PDF](https://arxiv.org/abs/2311.05463)]

[arxiv 2023.11]ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs 
[[PDF](https://arxiv.org/abs/2311.13600),[Page](https://ziplora.github.io/)]

[arxiv 2023.11]Soulstyler: Using Large Language Model to Guide Image Style Transfer for Target Object [[PDF](https://arxiv.org/abs/2311.13562)]

[arxiv 2023.11]InstaStyle: Inversion Noise of a Stylized Image is Secretly a Style Adviser[[PDF](https://arxiv.org/abs/2311.15040)]

[arxiv 2023.12]Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [[PDF](https://arxiv.org/abs/2312.02212)]

[arxiv 2023.12]StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter[[PDF](https://arxiv.org/abs/2312.00330),[Page](https://gongyeliu.github.io/StyleCrafter.github.io/)]

[arxiv 2023.12]Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer [[PDF](https://arxiv.org/abs/2312.09008)]

[arxiv 2023.12]Style Aligned Image Generation via Shared Attention [[PDF](https://arxiv.org/abs/2312.02133),[Page](http://style-aligned-gen.github.io/)] 

[arxiv 2024.01]FreeStyle: Free Lunch for Text-guided Style Transfer using Diffusion Models [[PDF](https://arxiv.org/abs/2401.15636), [Page](https://freestylefreelunch.github.io/)]

[arxiv 2024.02]Control Color: Multimodal Diffusion-based Interactive Image Colorization [[PDF](https://arxiv.org/abs/2402.10855), [Page](https://zhexinliang.github.io/Control_Color/)]

[arxiv 2024.02]One-Shot Structure-Aware Stylized Image Synthesis [[PDF](https://arxiv.org/abs/2402.17275)]

[arxiv 2024.02]Visual Style Prompting with Swapping Self-Attention [[PDF](https://arxiv.org/abs/2402.12974),[Page](https://curryjung.github.io/VisualStylePrompt/)]

[arxiv 2024.03]DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations [[PDF](https://arxiv.org/abs/2403.06951),[Page](https://tianhao-qi.github.io/DEADiff/)]

[arxiv 2024.03]Implicit Style-Content Separation using B-LoRA [[PDF](https://arxiv.org/abs/2403.14572),[Page](https://b-lora.github.io/B-LoRA/)]

[arxiv 2024.03]Break-for-Make: Modular Low-Rank Adaptations for Composable Content-Style Customization [[PDF](https://arxiv.org/abs/2403.19456)]

[arxiv 2024.04]InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2404.02733)]

[arxiv 2024.04]Tuning-Free Adaptive Style Incorporation for Structure-Consistent Text-Driven Style Transfer [[PDF](https://arxiv.org/abs/2404.06835)]

[arxiv 2024.04]Towards Highly Realistic Artistic Style Transfer via Stable Diffusion with Step-aware and Layer-aware Prompt [[PDF](https://arxiv.org/abs/2404.11474)]

[arxiv 2024.04]StyleBooth: Image Style Editing with Multimodal Instruction [[PDF](https://arxiv.org/abs/2404.12154)]

[arxiv 2024.04]FilterPrompt: Guiding Image Transfer in Diffusion Models [[PDF](https://arxiv.org/abs/2404.13263)]

[arxiv 2024.05]FreeTuner: Any Subject in Any Style with Training-free Diffusion[[PDF](https://arxiv.org/abs/2405.14201)]

[arxiv 2024.05] StyleMaster: Towards Flexible Stylized Image Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2405.15287)]

[arxiv 2024.06] Stylebreeder: Exploring and Democratizing Artistic Styles through Text-to-Image Models [[PDF](https://arxiv.org/abs/2406.14599),[Page](https://stylebreeder.github.io/)]
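Several of the adapter-based entries in this section (IP-Adapter, InstantStyle, CSGO and follow-ups) share one usage pattern: a reference image is injected through decoupled image cross-attention while the text prompt keeps control of content. A minimal sketch with the public IP-Adapter weights in `diffusers` (a common training-free baseline, not any single paper's method; file names are assumptions):

```python
# Minimal reference-style conditioning sketch with IP-Adapter in diffusers.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)  # higher = stronger adherence to the reference style

style_ref = load_image("style_reference.png")  # illustrative file name
image = pipe(prompt="a lighthouse on a cliff at dusk", ip_adapter_image=style_ref).images[0]
image.save("stylized.png")
```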
[arxiv 2024.07] StyleShot: A Snapshot on Any Style [[PDF](https://arxiv.org/abs/2407.01414),[Page](https://styleshot.github.io/)]

[arxiv 2024.07] InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2407.00788)]

[arxiv 2024.07] Frequency-Controlled Diffusion Model for Versatile Text-Guided Image-to-Image Translation [[PDF](https://arxiv.org/abs/2407.03006),[Page](https://github.com/XiangGao1102/FCDiffusion)]

[arxiv 2024.07] Magic Insert: Style-Aware Drag-and-Drop  [[PDF](https://arxiv.org/abs/2407.02489),[Page](https://magicinsert.github.io/)]

[arxiv 2024.07]Ada-adapter:Fast Few-shot Style Personalization of Diffusion Model with Pre-trained Image Encoder [[PDF](https://arxiv.org/abs/2407.05552)]

[arxiv 2024.07] Artist: Aesthetically Controllable Text-Driven Stylization without Training [[PDF](https://arxiv.org/abs/2303.17606),[Page](https://diffusionartist.github.io/)]

[arxiv 2024.08]  StyleBrush: Style Extraction and Transfer from a Single Image [[PDF](https://arxiv.org/abs/2408.09496)]

[arxiv 2024.08] CSGO: Content-Style Composition in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2408.16766),[Page](https://csgo-gen.github.io/)]

[arxiv 2024.09]StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models[[PDF](https://arxiv.org/abs/2409.02543),[Page](https://github.com/alipay/style-tokenizer)]

[arxiv 2024.09]Training-free Color-Style Disentanglement for Constrained Text-to-Image Synthesis[[PDF](https://arxiv.org/abs/2409.02429)]

[arxiv 2024.09] Mamba-ST: State Space Model for Efficient Style Transfer  [[PDF](https://arxiv.org/abs/2409.10385)]

[arxiv 2024.10] Harnessing the Latent Diffusion Model for Training-Free Image Style Transfer  [[PDF](https://arxiv.org/abs/2410.01366)]

[arxiv 2024.11] Style-Friendly SNR Sampler for Style-Driven Generation  [[PDF](https://arxiv.org/pdf/2411.14793)] 

[arxiv 2024.12] UnZipLoRA: Separating Content and Style from a Single Image  [[PDF](https://arxiv.org/abs/2412.04465),[Page](https://unziplora.github.io/)] 

[arxiv 2024.12] StyleStudio: Text-Driven Style Transfer with Selective Control of Style Elements  [[PDF](https://arxiv.org/abs/2412.08503),[Page](https://stylestudio-official.github.io/)] ![Code](https://img.shields.io/github/stars/Westlake-AGI-Lab/StyleStudio?style=social&label=Star)

[arxiv 2024.12]  OmniPrism: Learning Disentangled Visual Concept for Image Generation [[PDF](https://arxiv.org/abs/2412.12242),[Page](https://tale17.github.io/omni/)] 

[arxiv 2025.01]  Conditional Balance: Improving Multi-Conditioning Trade-Offs in Image Generation [[PDF](https://arxiv.org/pdf/2412.19853)]

[arxiv 2025.01] StyleRWKV: High-Quality and High-Efficiency Style Transfer with RWKV-like Architecture  [[PDF](https://arxiv.org/abs/2412.19535)]

[arxiv 2025.01] Single Trajectory Distillation for Accelerating Image and Video Style Transfer  [[PDF](https://arxiv.org/abs/2412.18945),[Page](https://single-trajectory-distillation.github.io/)] 

[arxiv 2025.01]  MangaNinja: Line Art Colorization with Precise Reference Following [[PDF](https://arxiv.org/abs/2501.08332),[Page](https://johanan528.github.io/MangaNinjia/)] ![Code](https://img.shields.io/github/stars/ali-vilab/MangaNinjia?style=social&label=Star)

[arxiv 2025.02] MaterialFusion: High-Quality, Zero-Shot, and Controllable Material Transfer with Diffusion Models  [[PDF](https://arxiv.org/abs/2502.06606),[Page](https://github.com/kzGarifullin/MaterialFusion)] 
![Code](https://img.shields.io/github/stars/kzGarifullin/MaterialFusion?style=social&label=Star)\n\n[arxiv 2025.02] StyleBlend: Enhancing Style-Specific Content Creation in Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/pdf/2502.09064)]\n\n[arxiv 2025.02] Contextual Gesture: Co-Speech Gesture Video Generation through Context-aware Gesture Representation  [[PDF](https://arxiv.org/pdf/2502.07466),[Page](https://github.com/LinLLLL/MaskST)] ![Code](https://img.shields.io/github/stars/LinLLLL/MaskST?style=social&label=Star)\n\n[arxiv 2025.02] GCC: Generative Color Constancy via Diffusing a Color Checker  [[PDF](https://arxiv.org/abs/2502.17435),[Page](https://chenwei891213.github.io/GCC/)] \n\n[arxiv 2025.02] K-LoRA: Unlocking Training-Free Fusion of Any Subject and Style LoRAs  [[PDF](https://arxiv.org/abs/2502.18461),[Page](https://k-lora.github.io/K-LoRA.io/)] ![Code](https://img.shields.io/github/stars/HVision-NKU/K-LoRA?style=social&label=Star)\n\n[arxiv 2025.02] Attention Distillation: A Unified Approach to Visual Characteristics Transfer  [[PDF](https://arxiv.org/abs/2502.20235),[Page](https://github.com/xugao97/AttentionDistillation)] ![Code](https://img.shields.io/github/stars/xugao97/AttentionDistillation?style=social&label=Star)\n\n[arxiv 2025.03] AttenST: A Training-Free Attention-Driven Style Transfer Framework with Pre-Trained Diffusion Models  [[PDF](https://arxiv.org/pdf/2503.07307)]\n\n[arxiv 2025.03] ConsisLoRA: Enhancing Content and Style Consistency for LoRA-based Style Transfer  [[PDF](https://arxiv.org/pdf/2503.10614),[Page](https://consislora.github.io/)] ![Code](https://img.shields.io/github/stars/000linlin/ConsisLoRA?style=social&label=Star)\n\n[arxiv 2025.03] Pluggable Style Representation Learning for Multi-Style Transfer[[PDF](https://arxiv.org/abs/2503.20368),[Page](https://github.com/SYSU-SAIL/SaMST)] ![Code](https://img.shields.io/github/stars/SYSU-SAIL/SaMST?style=social&label=Star)\n\n[arxiv 2025.03]  Semantix: An Energy Guided Sampler for Semantic Style Transfer [[PDF](https://arxiv.org/abs/2503.22344),[Page](https://huiang-he.github.io/semantix/)] ![Code](https://img.shields.io/github/stars/Huiang-He/Semantix-Styler?style=social&label=Star)\n\n[arxiv 2025.04]  A Training-Free Style-aligned Image Generation with Scale-wise Autoregressive Model[[PDF](https://arxiv.org/abs/2504.06144)]\n\n[arxiv 2025.05] Style Transfer with Diffusion Models for Synthetic-to-Real Domain Adaptation  [[PDF](https://arxiv.org/abs/2505.16360),[Page](https://github.com/echigot/cactif)] ![Code](https://img.shields.io/github/stars/echigot/cactif?style=social&label=Star)\n\n[arxiv 2025.06] Only-Style: Stylistic Consistency in Image Generation without Content Leakage  [[PDF](https://arxiv.org/abs/2506.09916),[Page](https://tilemahosaravanis.github.io/Only-Style-PP/)] ![Code](https://img.shields.io/github/stars/TilemahosAravanis/Only-Style?style=social&label=Star)\n\n[arxiv 2025.06]  SA-LUT: Spatial Adaptive 4D Look-Up Table for Photorealistic Style Transfer [[PDF](https://arxiv.org/pdf/2506.13465),[Page](https://github.com/Ry3nG/SA-LUT)] ![Code](https://img.shields.io/github/stars/Ry3nG/SA-LUT?style=social&label=Star)\n\n[arxiv 2025.07]  Imagine for Me: Creative Conceptual Blending of Real Images and Text via Blended Attention [[PDF](https://arxiv.org/pdf/2506.24085),[Page](https://imagineforme.github.io/)] ![Code](https://img.shields.io/github/stars/WonwoongCho/IT-Blender?style=social&label=Star)\n\n[arxiv 2025.07] Domain Generalizable Portrait Style Transfer  
[[PDF](https://arxiv.org/abs/2507.04243),[Page](https://github.com/wangxb29/DGPST)] ![Code](https://img.shields.io/github/stars/wangxb29/DGPST?style=social&label=Star)

[arxiv 2025.08]  SCFlow: Implicitly Learning Style and Content Disentanglement with Flow Models [[PDF](https://arxiv.org/abs/2508.03402),[Page](https://compvis.github.io/SCFlow/)] ![Code](https://img.shields.io/github/stars/CompVis/SCFlow?style=social&label=Star)

[arxiv 2025.08]  Leveraging Diffusion Models for Stylization using Multiple Style Images [[PDF](https://arxiv.org/pdf/2508.12784)]

[arxiv 2025.08] USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning  [[PDF](https://arxiv.org/abs/2508.18966),[Page](https://bytedance.github.io/USO/)] ![Code](https://img.shields.io/github/stars/bytedance/USO?style=social&label=Star)

[arxiv 2025.09]  Towards High-Fidelity, Identity-Preserving Real-Time Makeup Transfer: Decoupling Style Generation [[PDF](https://arxiv.org/abs/2509.02445),[Page]()]

[arxiv 2025.09]  OmniStyle2: Scalable and High Quality Artistic Style Transfer Data Generation via Destylization [[PDF](https://arxiv.org/abs/2509.05970),[Page](https://wangyephd.github.io/projects/omnistyle2.html)] 

[arxiv 2025.09] One-shot Embroidery Customization via Contrastive LoRA Modulation  [[PDF](https://arxiv.org/pdf/2509.18948),[Page](https://style3d.github.io/embroidery_customization/)] ![Code](https://img.shields.io/github/stars/Style3D/embroidery_customization-impl?style=social&label=Star)

[arxiv 2025.11] V-Shuffle: Zero-Shot Style Transfer via Value Shuffle  [[PDF](https://arxiv.org/abs/2511.06365),[Page](https://github.com/XinR-Tang/V-Shuffle)] ![Code](https://img.shields.io/github/stars/XinR-Tang/V-Shuffle?style=social&label=Star)

[arxiv 2025.11]  A Style is Worth One Code: Unlocking Code-to-Style Image Generation with Discrete Style Space [[PDF](https://arxiv.org/pdf/2511.10555),[Page](https://kwai-kolors.github.io/CoTyle)]

[arxiv 2025.11] Parameter-Efficient MoE LoRA for Few-Shot Multi-Style Editing  [[PDF](https://arxiv.org/abs/2511.11236)]

[arxiv 2025.11] NP-LoRA: Null Space Projection Unifies Subject and Style in LoRA Fusion  [[PDF](https://arxiv.org/pdf/2511.11051)]

[arxiv 2026.02] CoCoDiff: Correspondence-Consistent Diffusion Model for Fine-grained Style Transfer  [[PDF](https://arxiv.org/abs/2602.14464),[Page](https://github.com/Wenbo-Nie/CoCoDiff)] ![Code](https://img.shields.io/github/stars/Wenbo-Nie/CoCoDiff?style=social&label=Star)

[arxiv 2026.02] Cycle-Consistent Tuning for Layered Image Decomposition  [[PDF](https://arxiv.org/pdf/2602.20989)]

[arxiv 2026.02]  CleanStyle: Plug-and-Play Style Conditioning Purification for Text-to-Image Stylization [[PDF](https://arxiv.org/pdf/2602.20721),[Page](https://github.com/Westlake-AGI-Lab/CleanStyle)] ![Code](https://img.shields.io/github/stars/Westlake-AGI-Lab/CleanStyle?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## RAG
[arxiv 2025.02]  ImageRAG: Dynamic Image Retrieval for Reference-Guided Image Generation [[PDF](https://arxiv.org/pdf/2502.09411),[Page](https://rotem-shalev.github.io/ImageRAG)] ![Code](https://img.shields.io/github/stars/rotem-shalev/ImageRAG?style=social&label=Star)

[arxiv 2025.05] IA-T2I: Internet-Augmented Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2505.15779)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
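The RAG entries above differ in where references come from (a local pool vs. the web), but the core step is the same: rank candidate reference images against the prompt and hand the best match to the generator. A minimal sketch of that retrieval step with CLIP via `transformers` (an illustration of the general idea, not ImageRAG's or IA-T2I's actual pipeline; file names are assumptions):

```python
# Minimal sketch of CLIP text-to-image retrieval for retrieval-augmented T2I.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a red vintage car in the rain"
pool = ["ref_0.png", "ref_1.png", "ref_2.png"]  # illustrative candidate references

with torch.no_grad():
    text_emb = model.get_text_features(**processor(text=[prompt], return_tensors="pt"))
    images = [Image.open(p).convert("RGB") for p in pool]
    image_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))

# Cosine similarity between the prompt embedding and each candidate; the top match
# is then fed to the generator (e.g. as an IP-Adapter image or in-context reference).
sims = torch.nn.functional.cosine_similarity(text_emb, image_emb)
print("retrieved reference:", pool[int(sims.argmax())])
```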
## CoT
[arxiv 2025.03]  MINT: Multi-modal Chain of Thought in Unified Generative Models for Enhanced Image Generation [[PDF](https://arxiv.org/pdf/2503.01298)]

[arxiv 2025.05]  T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT [[PDF](https://arxiv.org/abs/2505.00703),[Page](https://github.com/CaraJ7/T2I-R1)] ![Code](https://img.shields.io/github/stars/CaraJ7/T2I-R1?style=social&label=Star)

[arxiv 2025.07] CoT-lized Diffusion: Let's Reinforce T2I Generation Step-by-step  [[PDF](https://arxiv.org/abs/2507.04451)]

[arxiv 2025.08]  Uni-cot: Towards Unified Chain-of-Thought Reasoning Across Text and Vision [[PDF](https://arxiv.org/abs/2508.05606),[Page](https://sais-fuxi.github.io/projects/uni-cot/)] ![Code](https://img.shields.io/github/stars/Fr0zenCrane/UniCoT?style=social&label=Star)

[arxiv 2025.09] Easier Painting Than Thinking: Can Text-to-Image Models Set the Stage, but Not Direct the Play?  [[PDF](https://arxiv.org/abs/2509.03516),[Page](https://t2i-corebench.github.io/)] 

[arxiv 2025.09] Draw-In-Mind: Learning Precise Image Editing via Chain-of-Thought Imagination  [[PDF](https://arxiv.org/abs/2509.01986),[Page](https://github.com/showlab/DIM)] ![Code](https://img.shields.io/github/stars/showlab/DIM?style=social&label=Star)

[arxiv 2025.09]  PromptEnhancer: A Simple Approach to Enhance Text-to-Image Models via Chain-of-Thought Prompt Rewriting [[PDF](https://arxiv.org/abs/2509.04545),[Page](https://hunyuan-promptenhancer.github.io/)] 

[arxiv 2025.09] Interleaving Reasoning for Better Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2509.06945),[Page](https://github.com/Osilly/Interleaving-Reasoning-Generation)] ![Code](https://img.shields.io/github/stars/Osilly/Interleaving-Reasoning-Generation?style=social&label=Star)

[arxiv 2025.11]  Thinking-while-Generating: Interleaving Textual Reasoning throughout Visual Generation [[PDF](https://arxiv.org/abs/2511.16671),[Page](https://think-while-gen.github.io/)] ![Code](https://img.shields.io/github/stars/ZiyuGuo99/Thinking-while-Generating?style=social&label=Star)

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## agent
[arxiv 2025.03] CoSTA∗: Cost-Sensitive Toolpath Agent for Multi-turn Image Editing  [[PDF](https://arxiv.org/abs/2503.10613),[Page](https://github.com/tianyi-lab/CoSTAR)] ![Code](https://img.shields.io/github/stars/tianyi-lab/CoSTAR?style=social&label=Star)

[arxiv 2025.04] CREA: A Collaborative Multi-Agent Framework for Creative Content Generation with Diffusion Models  [[PDF](https://arxiv.org/abs/2504.05306),[Page](https://crea-diffusion.github.io/)] 

[arxiv 2025.06]  ComfyUI-R1: Exploring Reasoning Models for Workflow Generation [[PDF](https://arxiv.org/abs/2506.09790),[Page](https://github.com/AIDC-AI/ComfyUI-Copilot)] ![Code](https://img.shields.io/github/stars/AIDC-AI/ComfyUI-Copilot?style=social&label=Star)

[arxiv 2025.08]  Follow-Your-Instruction: A Comprehensive MLLM Agent for World Data Synthesis [[PDF](https://arxiv.org/abs/2508.05580)]

[arxiv 2025.11]  ImAgent: A Unified Multimodal Agent Framework for Test-Time Scalable Image Generation [[PDF](https://arxiv.org/pdf/2511.11483)]

[arxiv 2025.12]  VisionDirector: Vision-Language Guided Closed-Loop Refinement for Generative Image Synthesis [[PDF](https://arxiv.org/abs/2512.19243)]
[arxiv 2026.02]  PhotoAgent: Where Aesthetic Intent Becomes Visual Reality [[PDF](https://arxiv.org/abs/2602.22809),[Page](https://mdyao.github.io/PhotoAgent/)] ![Code](https://img.shields.io/github/stars/mdyao/PhotoAgent?style=social&label=Star)

[arxiv 2026.03] ScaleEdit-12M: Scaling Open-Source Image Editing Data Generation via Multi-Agent Framework  [[PDF](https://arxiv.org/abs/2603.20644)]

[arxiv 2026.03] IMAGAgent: Orchestrating Multi-Turn Image Editing via Constraint-Aware Planning and Reflection  [[PDF](https://arxiv.org/abs/2603.29602)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)



## downstream apps
[arxiv 2023.11]Text-to-Sticker: Style Tailoring Latent Diffusion Models for Human Expression [[PDF](https://arxiv.org/abs/2311.10794)]

[arxiv 2023.11]Paragraph-to-Image Generation with Information-Enriched Diffusion Model [[PDF](https://arxiv.org/abs/2311.14284),[Page](https://weijiawu.github.io/ParaDiffusionPage/)]

[arxiv 2024.02]Text2Street: Controllable Text-to-image Generation for Street Views [[PDF](https://arxiv.org/abs/2402.04504)]

[arxiv 2024.02]FineDiffusion: Scaling up Diffusion Models for Fine-grained Image Generation with 10,000 Classes[[PDF](https://arxiv.org/abs/2402.18331),[Page](https://finediffusion.github.io/)]

[arxiv 2024.03]Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers [[PDF](https://arxiv.org/abs/2403.07214)]

[arxiv 2024.06] Coherent Zero-Shot Visual Instruction Generation  [[PDF](https://arxiv.org/abs/2406.04337),[Page](https://instruct-vis-zero.github.io/)]

[arxiv 2024.10] Inverse Painting: Reconstructing The Painting Process[[PDF](https://arxiv.org/abs/2409.20556),[Page](https://inversepainting.github.io/)]

[arxiv 2024.11] TKG-DM: Training-free Chroma Key Content Generation Diffusion Model  [[PDF](https://arxiv.org/abs/2411.15580)] 

[arxiv 2024.11] ScribbleLight: Single Image Indoor Relighting with Scribbles  [[PDF](https://arxiv.org/abs/2411.17696)] 

[arxiv 2024.11] ChatGen: Automatic Text-to-Image Generation From FreeStyle Chatting  [[PDF](https://arxiv.org/abs/2411.17176)]

[arxiv 2025.01] EmotiCrafter: Text-to-Emotional-Image Generation based on Valence-Arousal Model  [[PDF](https://arxiv.org/pdf/2501.05710)]

[arxiv 2025.03] DesignDiffusion: High-Quality Text-to-Design Image Generation with Diffusion Models  [[PDF](https://arxiv.org/pdf/2503.01645)]

[arxiv 2025.03]  A Recipe for Generating 3D Worlds From a Single Image [[PDF](https://katjaschwarz.github.io/worlds/),[Page](https://katjaschwarz.github.io/worlds/)] 

[arxiv 2025.03]  Parametric Shadow Control for Portrait Generation in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2503.21943)]

[arxiv 2025.04]  MAD: Makeup All-in-One with Cross-Domain Diffusion Model [[PDF](https://arxiv.org/abs/2504.02545),[Page](https://basiclab.github.io/MAD)] 

[arxiv 2025.04] PosterMaker: Towards High-Quality Product Poster Generation with Accurate Text Rendering  [[PDF](https://arxiv.org/abs/2504.06632),[Page](https://poster-maker.github.io/)] ![Code](https://img.shields.io/github/stars/eafn/PosterMaker?style=social&label=Star)

[arxiv 2025.04] MirrorVerse: Pushing Diffusion Models to Realistically Reflect the World  [[PDF](https://arxiv.org/abs/2504.15397),[Page](https://mirror-verse.github.io/)]

[arxiv 2025.06] PosterCraft: Rethinking High-Quality Aesthetic Poster Generation in a Unified Framework  
[[PDF](https://arxiv.org/abs/2506.10741),[Page](https://ephemeral182.github.io/PosterCraft/)] ![Code](https://img.shields.io/github/stars/Ephemeral182/PosterCraft?style=social&label=Star)\n\n[arxiv 2025.07]  FreeMorph: Tuning-Free Generalized Image Morphing with Diffusion Model [[PDF](https://arxiv.org/abs/2507.01953),[Page](https://yukangcao.github.io/FreeMorph/)] ![Code](https://img.shields.io/github/stars/yukangcao/FreeMorph?style=social&label=Star)\n\n[arxiv 2025.07]  DreamPoster: A Unified Framework for Image-Conditioned Generative Poster Design [[PDF](https://arxiv.org/abs/2507.04218),[Page](https://dreamposter.github.io/)] \n\n[arxiv 2025.08]  Draw Your Mind: Personalized Generation via Condition-Level Modeling in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/pdf/2508.03481),[Page](https://github.com/Burf/DrUM)] ![Code](https://img.shields.io/github/stars/Burf/DrUM?style=social&label=Star)\n\n[arxiv 2025.08]  TransLight: Image-Guided Customized Lighting Control with Generative Decoupling [[PDF](https://arxiv.org/abs/2508.14814)]\n\n[arxiv 2025.08]  Interact-Custom: Customized Human Object Interaction Image Generation [[PDF](https://arxiv.org/pdf/2508.19575),[Page](https://github.com/XZPKU/Inter-custom)] ![Code](https://img.shields.io/github/stars/XZPKU/Inter-custom?style=social&label=Star)\n\n[arxiv 2025.10] Paper2Web: Let's Make Your Paper Alive!  [[PDF](https://arxiv.org/abs/2510.15842),[Page](https://github.com/YuhangChen1/Paper2All)] ![Code](https://img.shields.io/github/stars/YuhangChen1/Paper2All?style=social&label=Star)\n\n[arxiv 2025.10] Stroke2Sketch: Harnessing Stroke Attributes for Training-Free Sketch Generation  [[PDF](https://arxiv.org/abs/2510.16319),[Page](https://github.com/rane7/Stroke2Sketch)] ![Code](https://img.shields.io/github/stars/rane7/Stroke2Sketch?style=social&label=Star)\n\n[arxiv 2025.10] Visual Diffusion Models are Geometric Solvers  [[PDF](https://arxiv.org/abs/2510.21697),[Page](https://kariander1.github.io/visual-geo-solver/)] ![Code](https://img.shields.io/github/stars/kariander1/visual-geo-solver?style=social&label=Star)\n\n[arxiv 2025.12]  PosterCopilot: Toward Layout Reasoning and Controllable Editing for Professional Graphic Design [[PDF](https://arxiv.org/abs/2512.04082),[Page](https://postercopilot.github.io/)] ![Code](https://img.shields.io/github/stars/JiazheWei/PosterCopilot?style=social&label=Star)\n\n[arxiv 2025.12] StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space  [[PDF](https://arxiv.org/abs/2512.10959),[Page](https://huggingface.co/spaces/prs-eth/stereospace_web)] \n\n[arxiv 2025.12]  Generative Refocusing: Flexible Defocus Control from a Single Image [[PDF](https://arxiv.org/abs/2512.16923),[Page](https://generative-refocusing.github.io/)] ![Code](https://img.shields.io/github/stars/rayray9999/Genfocus?style=social&label=Star)\n\n[arxiv 2026.01]  PI-Light: Physics-Inspired Diffusion for Full-Image Relighting [[PDF](https://arxiv.org/abs/2601.22135),[Page](https://github.com/ZhexinLiang/PI-Light)] ![Code](https://img.shields.io/github/stars/ZhexinLiang/PI-Light?style=social&label=Star)\n\n[arxiv 2026.01]  Creative Image Generation with Diffusion Model [[PDF](https://arxiv.org/abs/2601.22125),[Page](https://creative-t2i.github.io/)]\n\n[arxiv 2026.01]  PaperBanana: Automating Academic Illustration for AI Scientists [[PDF](https://arxiv.org/abs/2601.23265),[Page](https://dwzhu-pku.github.io/PaperBanana/)] 
![Code](https://img.shields.io/github/stars/dwzhu-pku/PaperBanana?style=social&label=Star)\n\n[arxiv 2026.01] DuoGen: Towards General Purpose Interleaved Multimodal Generation  [[PDF](https://arxiv.org/abs/2602.00508),[Page](https://research.nvidia.com/labs/dir/duogen/)] \n\n[arxiv 2026.03] LogoDiffuser: Training-Free Multilingual Logo Generation and Stylization via Letter-Aware Attention Control  [[PDF](https://arxiv.org/pdf/2603.09759)]\n\n[arxiv 2026.03] Omni-I2C: A Holistic Benchmark for High-Fidelity Image-to-Code Generation  [[PDF](https://arxiv.org/abs/2603.17508),[Page](https://github.com/MiliLab/Omni-I2C)] ![Code](https://img.shields.io/github/stars/MiliLab/Omni-I2C?style=social&label=Star)\n\n[arxiv 2026.03] FontCrafter: High-Fidelity Element-Driven Artistic Font Creation with Visual In-Context Generation  [[PDF](https://arxiv.org/abs/2603.22054)]\n\n\n[arxiv 2026.03] PSDesigner: Automated Graphic Design with a Human-Like Creative Workflow  [[PDF](https://arxiv.org/abs/2603.25738),[Page](https://henghuiding.com/PSDesigner/)]\n\n[arxiv 2026.03] Unify-Agent: A Unified Multimodal Agent for World-Grounded Image Synthesis  [[PDF](https://arxiv.org/abs/2603.29620),[Page](https://github.com/shawn0728/Unify-Agent)] ![Code](https://img.shields.io/github/stars/shawn0728/Unify-Agent?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## test-time computation\n[arxiv 2025.01]  Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step [[PDF](https://arxiv.org/abs/2501.13926),[Page](https://github.com/ZiyuGuo99/Image-Generation-CoT)] ![Code](https://img.shields.io/github/stars/ZiyuGuo99/Image-Generation-CoT?style=social&label=Star)\n\n[arxiv 2025.03] Test-Time Visual In-Context Tuning  [[PDF](https://arxiv.org/html/2503.21777v1)]\n\n[arxiv 2025.04]  From Reflection to Perfection: Scaling Inference-Time Optimization for Text-to-Image Diffusion Models via Reflection Tuning [[PDF](https://arxiv.org/abs/2504.16080),[Page](https://diffusion-cot.github.io/reflection2perfection/)] ![Code](https://img.shields.io/github/stars/Diffusion-CoT/ReflectionFlow?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Joint generation\n\n[arxiv 2025.01] Orchid: Image Latent Diffusion for Joint Appearance and Geometry Generation  [[PDF](https://arxiv.org/abs/2501.13087),[Page](https://orchid3d.github.io/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## camera \n[arxiv 2025.01] PreciseCam: Precise Camera Control for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2501.12910),[Page](https://graphics.unizar.es/projects/PreciseCam2024/#)]\n\n[arxiv 2025.10]  DiffCamera: Arbitrary Refocusing on Images [[PDF](https://arxiv.org/abs/2509.26599)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Mesh generation\n[arxiv 2024.09]  EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation [[PDF](https://arxiv.org/abs/2409.18114),[Page](https://research.nvidia.com/labs/dir/edgerunner/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## depth \n[arxiv 2024.09] Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction  
[[PDF](https://arxiv.org/abs/2409.18124),[Page](https://lotus3d.github.io/)]\n\n[arxiv 2024.09]Self-Distilled Depth Refinement with Noisy Poisson Fusion [[PDF](https://arxiv.org/abs/2409.17880),[Page](https://github.com/lijia7/SDDR)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## layer \n[arxiv 2025.12] Qwen-Image-Layered: Towards Inherent Editability via Layer Decomposition  [[PDF](https://arxiv.org/abs/2512.15603),[Page](https://github.com/QwenLM/Qwen-Image-Layered)] ![Code](https://img.shields.io/github/stars/QwenLM/Qwen-Image-Layered?style=social&label=Star)\n\n[arxiv 2026.01]  Controllable Layered Image Generation for Real-World Editing [[PDF](https://arxiv.org/pdf/2601.15507),[Page](https://rayjryang.github.io/LASAGNA-Page/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## scaling\n\n[arxiv 2024.10]  FINE: Factorizing Knowledge for Initialization of Variable-sized Diffusion Models [[PDF](https://arxiv.org/pdf/2409.19289)]\n\n[arxiv 2024.12]  Efficient Scaling of Diffusion Transformers for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2412.12391)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## disentanglement\n[ICMR 2023]Not Only Generative Art: Stable Diffusion for Content-Style Disentanglement in Art Analysis [[PDF](https://arxiv.org/abs/2304.10278)]\n\n\n## Face ID \n[arxiv 2022.12]HS-Diffusion: Learning a Semantic-Guided Diffusion Model for Head Swapping [[PDF](https://arxiv.org/pdf/2212.06458.pdf)]\n\n[arxiv 2023.06]Inserting Anybody in Diffusion Models via Celeb Basis [[PDF](https://arxiv.org/abs/2306.00926), [Page](https://celeb-basis.github.io/)]\n\n[arxiv 2023.07]DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation [[PDF](https://arxiv.org/abs/2307.00300),[Page](https://dreamidentity.github.io/)]\n\n[arxiv 2024.10] FuseAnyPart: Diffusion-Driven Facial Parts Swapping via Multiple Reference Images  [[PDF](https://arxiv.org/abs/2410.22771),[Page](https://thomas-wyh.github.io/)]\n\n[arxiv 2025.02] GHOST 2.0: Generative High-fidelity One Shot Transfer of Heads  [[PDF](https://arxiv.org/pdf/2502.18417)]\n\n[arxiv 2025.03]  HyperLoRA: Parameter-Efficient Adaptive Generation for Portrait Synthesis [[PDF](https://arxiv.org/pdf/2503.16944)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## scene composition\n[arxiv 2023.02]Mixture of Diffusers for Scene Composition and High Resolution Image Generation [[PDF](https://arxiv.org/abs/2302.02412)]\n\n[arxiv 2023.02]Cross-domain Compositing with Pretrained Diffusion Models [[PDF](https://arxiv.org/abs/2302.10167)]\n\n\n## hand writing \n[arxiv 2023.03]WordStylist: Styled Verbatim Handwritten Text Generation with Latent Diffusion Models [[PDF](https://arxiv.org/abs/2303.16576)]\n\n\n## speed\n[arxiv 2023.05]FISEdit: Accelerating Text-to-image Editing via Cache-enabled Sparse Diffusion Inference [[PDF](https://arxiv.org/abs/2305.17423)]\n\n[arxiv 2023.06]Fast Training of Diffusion Models with Masked Transformers [[PDF](https://arxiv.org/pdf/2306.09305.pdf)]\n\n[arxiv 2023.06]Fast Diffusion Model [[PDF](https://arxiv.org/abs/2306.06991)]\n\n[arxiv 2023.06]Masked Diffusion Models are Fast Learners [[PDF](https://arxiv.org/abs/2306.11363)]\n\n[arxiv 2023.10]Latent Consistency Models: Synthesizing High-Resolution 
Images with Few-Step Inference [[PDF](https://arxiv.org/abs/2310.04378)]\n\n[arxiv 2023.11]UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs [[PDF](https://arxiv.org/abs/2311.09257)]\n\n[arxiv 2023.11]AdaDiff: Adaptive Step Selection for Fast Diffusion [[PDF](https://arxiv.org/abs/2311.14768)]\n\n[arxiv 2023.11]MobileDiffusion: Subsecond Text-to-Image Generation on Mobile Devices [[PDF](https://arxiv.org/abs/2311.16567)]\n\n[arxiv 2023.11]Manifold Preserving Guided Diffusion [[PDF](https://arxiv.org/abs/2311.16424)]\n\n[arxiv 2023.11]LCM-LoRA: A Universal Stable-Diffusion Acceleration Module [[PDF](https://arxiv.org/abs/2311.05556),[Page](https://github.com/luosiallen/latent-consistency-model)]\n\n[arxiv 2023.11]Adversarial Diffusion Distillation [[PDF](https://arxiv.org/abs/2311.17042),[Page](https://huggingface.co/stabilityai/)]\n\n[arxiv 2023.12]One-step Diffusion with Distribution Matching Distillation [[PDF](https://arxiv.org/abs/2311.18828),[Page](https://tianweiy.github.io/dmd/)]\n\n[arxiv 2023.12]SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation [[PDF](https://arxiv.org/abs/2312.05239), [Page](https://thuanz123.github.io/swiftbrush/)]\n\n[arxiv 2023.12]SpeedUpNet: A Plug-and-Play Hyper-Network for Accelerating Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2312.08887)]\n\n[arxiv 2023.12]Not All Steps are Equal: Efficient Generation with Progressive Diffusion Models [[PDF](https://arxiv.org/abs/2312.13307)]\n\n[arxiv 2024.01]Fast Inference Through The Reuse Of Attention Maps In Diffusion Models [[PDF](https://arxiv.org/abs/2401.01008)]\n\n[arxiv 2024.02]SDXL-Lightning: Progressive Adversarial Diffusion Distillation[[PDF](https://arxiv.org/abs/2402.13929),[Page](https://huggingface.co/ByteDance/SDXL-Lightning)]\n\n[arxiv 2024.03]DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models [[PDF](https://arxiv.org/abs/2402.19481), [Page](https://hanlab.mit.edu/projects/distrifusion)]\n\n[arxiv 2024.03]Fast High-Resolution Image Synthesis with Latent Adversarial Diffusion Distillation [[PDF](https://arxiv.org/abs/2403.12015)]\n\n[arxiv 2024.03]You Only Sample Once: Taming One-Step Text-To-Image Synthesis by Self-Cooperative Diffusion GANs [[PDF](https://arxiv.org/abs/2403.12931)]\n\n[arxiv 2024.04]T-GATE: Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2404.02747),[Page](https://github.com/HaozheLiu-ST/T-GATE)]\n\n[arxiv 2024.04]BinaryDM: Towards Accurate Binarization of Diffusion Model [[PDF](https://arxiv.org/abs/2404.05662), [Page](https://github.com/Xingyu-Zheng/BinaryDM)]\n\n[arxiv 2024.04]LAPTOP-Diff: Layer Pruning and Normalized Distillation for Compressing Diffusion Models [[PDF](https://arxiv.org/abs/2404.11098)]\n\n[arxiv 2024.04]Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis [[PDF](https://arxiv.org/abs/2404.13686)]\n\n[arxiv 2024.05] Improved Distribution Matching Distillation for Fast Image Synthesis [[PDF](https://arxiv.org/abs/2405.14867), [Page](https://tianweiy.github.io/dmd2)]\n\n[arxiv 2024.05]PaGoDA: Progressive Growing of a One-Step Generator from a Low-Resolution Diffusion Teacher [[PDF](https://arxiv.org/abs/2405.14822)]\n\n[arxiv 2024.05]PipeFusion: Displaced Patch Pipeline Parallelism for Inference of Diffusion Transformer Models [[PDF](https://arxiv.org/abs/2405.14430)]\n\n[arxiv 2024.05]Reward Guided Latent Consistency 
Distillation[[PDF](https://arxiv.org/abs/2403.11027), [Page](https://rg-lcd.github.io/)]\n\n[arxiv 2024.06]Diffusion Models Are Innate One-Step Generators [[PDF](https://arxiv.org/abs/2405.20750)]\n\n[arxiv 2024.06] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation[[PDF](https://arxiv.org/abs/2406.02540), [Page](https://a-suozhang.xyz/viditq.github.io/)]\n\n[arxiv 2024.06]Flash Diffusion: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation\n [[PDF](https://arxiv.org/abs/2406.02347), [Page](https://github.com/gojasper/flash-diffusion)]\n\n[arxiv 2024.06]Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps [[PDF](https://arxiv.org/abs/2406.14539), [Page](https://yandex-research.github.io/invertible-cd/)]\n\n[arxiv 2024.06]Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment[[PDF](https://arxiv.org/abs/2406.12303)]\n\n[arxiv 2024.07]Minutes to Seconds: Speeded-up DDPM-based Image Inpainting with Coarse-to-Fine Sampling [[PDF](https://arxiv.org/abs/2407.05875), [Page](https://github.com/linghuyuhangyuan/M2S)]\n\n[arxiv 2024.07]Efficient Training with Denoised Neural Weights [[PDF](https://arxiv.org/abs/2407.11966), [Page](https://yifanfanfanfan.github.io/denoised-weights/)]\n\n[arxiv 2024.07]SlimFlow: Training Smaller One-Step Diffusion Models with Rectified Flow[[PDF](https://arxiv.org/abs/2407.12718)]\n\n[arxiv 2024.08] TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models[[PDF](https://arxiv.org/abs/2408.00735), [Page](https://turboedit-paper.github.io/)]\n\n[arxiv 2024.08] A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models[[PDF](https://arxiv.org/abs/2408.05927)]\n\n\n[arxiv 2024.08]Low-Bitwidth Floating Point Quantization for Efficient High-Quality Diffusion Models [[PDF](https://arxiv.org/abs/2408.06995)]\n\n[arxiv 2024.08]PFDiff: Training-free Acceleration of Diffusion Models through the Gradient Guidance of Past and Future [[PDF](https://arxiv.org/abs/2408.08822)]\n\n[arxiv 2024.08] SwiftBrush v2: Make Your One-step Diffusion Model Better Than Its Teacher[[PDF](https://arxiv.org/abs/2408.14176)]\n\n[arxiv 2024.08]Distribution Backtracking Builds A Faster Convergence Trajectory for One-step Diffusion Distillation [[PDF](https://arxiv.org/abs/2408.15991), [Page](https://github.com/SYZhang0805/DisBack)]\n\n[arxiv 2024.09]VQ4DiT: Efficient Post-Training Vector Quantization for Diffusion Transformers[[PDF](https://arxiv.org/abs/2408.17131)]\n\n[arxiv 2024.09]Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization [[PDF](https://arxiv.org/pdf/%3CARXIV%20PAPER%20ID%3E.pdf), [Page](https://yandex-research.github.io/vqdm/)]\n\n[arxiv 2024.09]FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner [[PDF](https://arxiv.org/abs/2409.18128), [Page](https://github.com/shiml20/FlowTurbo)]\n\n[arxiv 2024.10] Simple and Fast Distillation of Diffusion Models  [[PDF](https://arxiv.org/abs/2409.19681),[Page](https://github.com/zju-pi/diff-sampler)]\n\n[arxiv 2024.10] Relational Diffusion Distillation for Efficient Image Generation [[PDF](https://arxiv.org/abs/2410.07679),[Page]()]\n\n[arxiv 2024.10] Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer  [[PDF](https://arxiv.org/abs/2410.10629),[Page](https://nvlabs.github.io/Sana/)]\n\n[arxiv 2024.10] FasterDiT: Towards Faster Diffusion Transformers Training without Architecture Modification 
[[PDF](https://arxiv.org/abs/2410.10356),[Page]()]\n\n[arxiv 2024.10]  Efficient Diffusion Models: A Comprehensive Survey from Principles to Practices [[PDF](https://arxiv.org/abs/2410.11795),[Page](https://github.com/ponyzym/Efficient-DMs-Survey)]\n\n[arxiv 2024.10] One Step Diffusion via Shortcut Models  [[PDF](https://arxiv.org/abs/2410.12557),[Page](https://github.com/kvfrans/shortcut-models)]\n\n[arxiv 2024.10] BiGR: Harnessing Binary Latent Codes for Image Generation and Improved Visual Representation Capabilities [[PDF](https://arxiv.org/abs/2410.14672),[Page](https://haoosz.github.io/BiGR)]\n\n[arxiv 2024.10] One-Step Diffusion Distillation through Score Implicit Matching  [[PDF](https://arxiv.org/abs/2410.16794),[Page]()]\n\n[arxiv 2024.10] DiP-GO: A Diffusion Pruner via Few-step Gradient Optimization  [[PDF](https://arxiv.org/abs/2410.16942)]\n\n[arxiv 2024.10] Simplifying, stabilizing, and scaling continuous-time consistency models  [[PDF](https://arxiv.org/abs/2410.11081),[Page](https://openai.com/index/simplifying-stabilizing-and-scaling-continuous-time-consistency-models/)]\n\n[arxiv 2024.10] Fast constrained sampling in pre-trained diffusion models  [[PDF](https://arxiv.org/abs/2410.18804)]\n\n[arxiv 2024.10]  Flow Generator Matching [[PDF](https://arxiv.org/abs/2410.19310),[Page]()]\n\n[arxiv 2024.10] Multi-student Diffusion Distillation for Better One-step Generators [[PDF](https://arxiv.org/abs/2410.23274),[Page](https://research.nvidia.com/labs/toronto-ai/MSD/)]\n\n[arxiv 2024.10] Diff-Instruct*: Towards Human-Preferred One-step Text-to-image Generative Models  [[PDF](https://arxiv.org/abs/2410.20898),[Page]()]\n\n[arxiv 2024.11] SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models  [[PDF](https://arxiv.org/abs/2411.05007),[Page](https://hanlab.mit.edu/projects/svdquant)]\n\n[arxiv 2024.11]  Leveraging Previous Steps: A Training-free Fast Solver for Flow Diffusion [[PDF](https://arxiv.org/abs/2411.07627)]\n\n[arxiv 2024.11] Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training [[PDF](https://arxiv.org/abs/2411.09998)]\n\n[arxiv 2024.11]  PoM: Efficient Image and Video Generation with the Polynomial Mixer [[PDF](https://arxiv.org/abs/2411.12663),[Page](https://github.com/davidpicard/HoMM)]\n\n[arxiv 2024.12]  TSD-SR: One-Step Diffusion with Target Score Distillation for Real-World Image Super-Resolution [[PDF](https://arxiv.org/abs/2411.18263)] \n\n\n[arxiv 2024.12] SNOOPI: Supercharged One-step Diffusion Distillation with Proper Guidance  [[PDF](https://arxiv.org/abs/2412.02687),[Page](https://snoopi-onestep.github.io/)] ![Code](https://img.shields.io/github/stars/VinAIResearch/SNOOPI?style=social&label=Star)\n\n[arxiv 2024.12] Schedule On the Fly: Diffusion Time Prediction for Faster and Better Image Generation  [[PDF](https://arxiv.org/abs/2412.01243)]\n\n[arxiv 2024.12]  SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step Diffusion [[PDF](https://arxiv.org/abs/2412.04301),[Page](https://swift-edit.github.io/)] \n\n[arxiv 2024.12]  Effortless Efficiency: Low-Cost Pruning of Diffusion Models [[PDF](https://arxiv.org/pdf/2412.02852.pdf),[Page](https://yangzhang-v5.github.io/EcoDiff/)] ![Code](https://img.shields.io/github/stars/YaNgZhAnG-V5/EcoDiff?style=social&label=Star)\n\n[arxiv 2024.12] FlexDiT: Dynamic Token Density Control for Diffusion Transformer  [[PDF](https://arxiv.org/abs/2412.06028)]\n\n[arxiv 2024.12] SnapGen: Taming High-Resolution Text-to-Image Models for Mobile Devices with Efficient Architectures and 
Training  [[PDF](),[Page](https://snap-research.github.io/snapgen/)] \n\n[arxiv 2024.12]  Self-Corrected Flow Distillation for Consistent One-Step and Few-Step Text-to-Image Generation [[PDF](https://arxiv.org/abs/2412.16906)]\n\n[arxiv 2024.12]  LatentCRF: Continuous CRF for Efficient Latent Diffusion [[PDF](https://arxiv.org/pdf/2412.18596)]\n\n[arxiv 2025.01] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2501.04304),[Page](https://ugonfor.kr/DGQ/)] ![Code](https://img.shields.io/github/stars/ugonfor/DGQ?style=social&label=Star)\n\n[arxiv 2025.01] Dissecting Bit-Level Scaling Laws in Quantizing Vision Generative Models  [[PDF](https://arxiv.org/abs/2501.06218)]\n\n[arxiv 2025.01]  Accelerate High-Quality Diffusion Models with Inner Loop Feedback [[PDF](https://arxiv.org/abs/2501.13107),[Page](https://mgwillia.github.io/ilf/)] \n\n[arxiv 2025.02]  Improved Training Technique for Latent Consistency Models [[PDF](https://arxiv.org/abs/2502.01441),[Page](https://github.com/quandao10/sLCT/)] ![Code](https://img.shields.io/github/stars/quandao10/sLCT?style=social&label=Star)\n\n[arxiv 2025.02] Region-Adaptive Sampling for Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2502.10389),[Page](https://microsoft.github.io/RAS/)] ![Code](https://img.shields.io/github/stars/microsoft/RAS?style=social&label=Star)\n\n[arxiv 2025.02] One-step Diffusion Models with f-Divergence Distribution Matching  [[PDF](https://arxiv.org/abs/2502.15681),[Page](https://research.nvidia.com/labs/genair/f-distill/)] \n\n[arxiv 2025.02] TraFlow: Trajectory Distillation on Pre-Trained Rectified Flow  [[PDF](https://arxiv.org/pdf/2502.16972)]\n\n[arxiv 2025.02]  SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference\n [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.03] Q&C: When Quantization Meets Cache in Efficient Image Generation  [[PDF](https://arxiv.org/pdf/2503.02508),[Page](https://github.com/xinding-sys/Quant-Cache)] ![Code](https://img.shields.io/github/stars/xinding-sys/Quant-Cache?style=social&label=Star)\n\n[arxiv 2025.03]  Optimizing for the Shortest Path in Denoising Diffusion Model [[PDF](https://arxiv.org/pdf/2503.03265),[Page](https://github.com/UnicomAI/ShortDF)] ![Code](https://img.shields.io/github/stars/UnicomAI/ShortDF?style=social&label=Star)\n\n[arxiv 2025.03]  Effective and Efficient Masked Image Generation Models [[PDF](https://arxiv.org/abs/2503.07197)]\n\n[arxiv 2025.03] Learning Few-Step Diffusion Models by Trajectory Distribution Matching  [[PDF](https://arxiv.org/pdf/2503.06674),[Page](https://tdm-t2x.github.io/)] ![Code](https://img.shields.io/github/stars/Luo-Yihong/TDM?style=social&label=Star)\n\n[arxiv 2025.03] NAMI: Efficient Image Generation via Progressive Rectified Flow Transformers  [[PDF](https://arxiv.org/pdf/2503.09242)]\n\n[arxiv 2025.03] Distilling Diversity and Control in Diffusion Models  [[PDF](https://arxiv.org/pdf/2503.10637),[Page](https://distillation.baulab.info/)] ![Code](https://img.shields.io/github/stars/rohitgandikota/distillation?style=social&label=Star)\n\n\n[arxiv 2025.03]  FP4DiT: Towards Effective Floating Point Quantization for Diffusion Transformers [[PDF](https://arxiv.org/abs/2503.15465),[Page](https://github.com/cccrrrccc/FP4DiT)] ![Code](https://img.shields.io/github/stars/cccrrrccc/FP4DiT?style=social&label=Star)\n\n\n[arxiv 2025.03] Di[M]O: Distilling Masked Diffusion Models into One-step Generator  
[[PDF](https://arxiv.org/abs/2503.15457),[Page](https://yuanzhi-zhu.github.io/DiMO/)] ![Code](https://img.shields.io/github/stars/yuanzhi-zhu/DiMO?style=social&label=Star)\n\n[arxiv 2025.03]  BlockDance: Reuse Structurally Similar Spatio-Temporal Features to Accelerate Diffusion Transformers [[PDF](https://arxiv.org/pdf/2503.15927)]\n\n[arxiv 2025.04]  DyDiT++: Dynamic Diffusion Transformers for Efficient Visual Generation [[PDF](https://arxiv.org/abs/2504.06803),[Page](https://github.com/alibaba-damo-academy/DyDiT)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/DyDiT?style=social&label=Star)\n\n[arxiv 2025.05]  Quantization of Diffusion Models [[PDF](https://arxiv.org/abs/2505.05215),[Page](https://github.com/TaylorJocelyn/Diffusion-Model-Quantization)] ![Code](https://img.shields.io/github/stars/TaylorJocelyn/Diffusion-Model-Quantization?style=social&label=Star)\n\n[arxiv 2025.06] Cost-Aware Routing for Efficient Text-To-Image Generation  [[PDF](https://arxiv.org/pdf/2506.14753)]\n\n[arxiv 2025.06]  Make It Efficient: Dynamic Sparse Attention for Autoregressive Image Generation [[PDF](https://arxiv.org/abs/2506.18226)]\n\n[arxiv 2025.06]  Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models [[PDF](https://arxiv.org/abs/2506.18251),[Page](https://github.com/deep-optimization/Morse)] ![Code](https://img.shields.io/github/stars/deep-optimization/Morse?style=social&label=Star)\n\n[arxiv 2025.06] Inverse-and-Edit: Effective and Fast Image Editing by Cycle Consistency Models  [[PDF](https://arxiv.org/abs/2506.19103),[Page](https://github.com/ControlGenAI/Inverse-and-Edit/)] ![Code](https://img.shields.io/github/stars/ControlGenAI/Inverse-and-Edit/?style=social&label=Star)\n\n[arxiv 2025.07]  Upsample What Matters: Region-Adaptive Latent  Sampling for Accelerated Diffusion Transformers [[PDF](https://arxiv.org/pdf/2507.08422)]\n\n[arxiv 2025.08] InstantEdit: Text-Guided Few-Step Image Editing with Piecewise Rectified Flow  [[PDF](https://arxiv.org/pdf/2508.06033),[Page](https://github.com/Supercomputing-System-AI-Lab/InstantEdit)] ![Code](https://img.shields.io/github/stars/Supercomputing-System-AI-Lab/InstantEdit?style=social&label=Star)\n\n[arxiv 2025.08] OmniCache: A Trajectory-Oriented Global Perspective on Training-Free Cache Reuse for Diffusion Transformer Models  [[PDF](https://arxiv.org/pdf/2508.16212)]\n\n[arxiv 2025.08]  Forecast then Calibrate: Feature Caching as ODE for Efficient Diffusion Transformers [[PDF](https://arxiv.org/pdf/2508.16211)]\n\n[arxiv 2025.08] DiCache: Let Diffusion Model Determine Its Own Cache  [[PDF](https://arxiv.org/abs/2508.17356),[Page](https://github.com/Bujiazi/DiCache)] ![Code](https://img.shields.io/github/stars/Bujiazi/DiCache?style=social&label=Star)\n\n[arxiv 2025.09] RAPID^3: Tri-Level Reinforced Acceleration Policies for Diffusion Transformer  [[PDF](https://arxiv.org/abs/2509.22323)]\n\n[arxiv 2025.09] DistillKac: Few-Step Image Generation via Damped Wave Equations  [[PDF](https://arxiv.org/abs/2509.21513)]\n\n[arxiv 2025.10] OBS-Diff: Accurate Pruning For Diffusion Models in One-Shot  [[PDF](https://arxiv.org/abs/2510.06751),[Page](https://github.com/Alrightlone/OBS-Diff)] ![Code](https://img.shields.io/github/stars/Alrightlone/OBS-Diff?style=social&label=Star)\n\n[arxiv 2025.10]  One-step Diffusion Models with Bregman Density Ratio Matching [[PDF](https://arxiv.org/pdf/2510.16983)]\n\n[arxiv 2025.10] AlphaFlow: Understanding and Improving MeanFlow Models  
[[PDF](https://arxiv.org/abs/2510.20771),[Page](https://github.com/snap-research/alphaflow)] ![Code](https://img.shields.io/github/stars/snap-research/alphaflow?style=social&label=Star)\n\n[arxiv 2025.10] Sparser Block-Sparse Attention via Token Permutation  [[PDF](https://arxiv.org/abs/2510.21270),[Page](https://github.com/xinghaow99/pbs-attn)] ![Code](https://img.shields.io/github/stars/xinghaow99/pbs-attn?style=social&label=Star)\n\n[arxiv 2025.10] ETC: training-free diffusion models acceleration with Error-aware Trend Consistency  [[PDF](https://arxiv.org/abs/2510.24129)]\n\n[arxiv 2025.11] FreeControl: Efficient, Training-Free Structural Control via One-Step Attention Extraction  [[PDF](https://arxiv.org/abs/2511.05219)]\n\n[arxiv 2025.11]  FlowSteer: Guiding Few-Step Image Synthesis with Authentic Trajectories [[PDF](https://arxiv.org/abs/2511.18834)]\n\n[arxiv 2025.12]  Straighter and Faster: Efficient One-Step Generative Modeling via Meanflow on Rectified Trajectories [[PDF](https://arxiv.org/abs/2511.23342),[Page](https://github.com/Xinxi-Zhang/Re-MeanFlow)] ![Code](https://img.shields.io/github/stars/Xinxi-Zhang/Re-MeanFlow?style=social&label=Star)\n\n[arxiv 2025.12] ConvRot: Rotation-Based Plug-and-Play 4-bit Quantization for Diffusion Transformers  [[PDF](https://arxiv.org/abs/2512.03673)]\n\n[arxiv 2025.12] Bridging Fidelity-Reality with Controllable One-Step Diffusion for Image Super-Resolution  [[PDF](https://arxiv.org/abs/2512.14061),[Page](https://github.com/Chanson94/CODSR)] \n\n[arxiv 2026.01] MeanCache: From Instantaneous to Average Velocity for Accelerating Flow Matching Inference  [[PDF](https://arxiv.org/pdf/2601.19961)]\n\n[arxiv 2026.01] One-step Latent-free Image Generation with Pixel Mean Flows  [[PDF](https://arxiv.org/abs/2601.22158)]\n\n[arxiv 2026.01] Token Pruning for In-Context Generation in Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2602.01609)]\n\n[arxiv 2026.02] ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation  [[PDF](https://arxiv.org/abs/2602.09014),[Page](https://github.com/pnotp/ArcFlow)] ![Code](https://img.shields.io/github/stars/pnotp/ArcFlow?style=social&label=Star)\n\n[arxiv 2026.03]  BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers [[PDF](https://arxiv.org/abs/2603.09582),[Page](https://github.com/EdwardChasel/BinaryAttention)] ![Code](https://img.shields.io/github/stars/EdwardChasel/BinaryAttention?style=social&label=Star)\n\n[arxiv 2026.03] Unlearning for One-Step Generative Models via Unbalanced Optimal Transport  [[PDF](https://arxiv.org/abs/2603.16489)]\n\n[arxiv 2026.03] CIAR: Interval-based Collaborative Decoding for Image Generation Acceleration  [[PDF](https://arxiv.org/abs/2603.25463)]\n\n[arxiv 2026.03] BiFM: Bidirectional Flow Matching for Few-Step Image Editing and Generation  [[PDF](https://arxiv.org/abs/2603.24942)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n# end of speed\n\n\n\n## consistency model \n[arxiv 2024.10] Simplifying, stabilizing, and scaling continuous-time consistency models  [[PDF](https://arxiv.org/abs/2410.11081),[Page](https://openai.com/index/simplifying-stabilizing-and-scaling-continuous-time-consistency-models/)]\n\n[arxiv 2023.10]Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference [[PDF](https://arxiv.org/abs/2310.04378)]\n\n[arxiv 2024.10]  Stable Consistency Tuning: Understanding and Improving Consistency 
Models [[PDF](https://arxiv.org/abs/2410.18958),[Page](https://github.com/G-U-N/Stable-Consistency-Tuning)]\n\n[arxiv 2024.10] Truncated Consistency Models  [[PDF](https://arxiv.org/abs/2410.14895),[Page](https://truncated-cm.github.io/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## limited data \n[arxiv 2023.06]Decompose and Realign: Tackling Condition Misalignment in Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2306.14153)]\n\n\n## Study \n[CVPR 2023]Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models [[PDF](https://openaccess.thecvf.com/content/CVPR2023/papers/Somepalli_Diffusion_Art_or_Digital_Forgery_Investigating_Data_Replication_in_Diffusion_CVPR_2023_paper.pdf)]\n\n[arxiv 2023.06]Understanding and Mitigating Copying in Diffusion Models [[PDF](https://arxiv.org/abs/2305.20086), [code](https://github.com/somepago/DCR)]\n\n[arxiv 2023.06]Intriguing Properties of Text-guided Diffusion Models [[PDF](https://arxiv.org/pdf/2306.00974.pdf)]\n\n[arxiv 2023.06]Stable Diffusion is Unstable [[PDF](https://arxiv.org/abs/2306.02583)]\n\n[arxiv 2023.06]A Geometric Perspective on Diffusion Models [[PDF](https://arxiv.org/abs/2305.19947)]\n\n[arxiv 2023.06]Emergent Correspondence from Image Diffusion [[PDF](https://arxiv.org/abs/2306.03881)]\n\n[arxiv 2023.06]Evaluating Data Attribution for Text-to-Image Models [[PDF](https://arxiv.org/abs/2306.09345), [Page](https://yossigandelsman.github.io/rosetta_neurons/)]\n\n[arxiv 2023.06]Norm-guided latent space exploration for text-to-image generation [[PDF](https://arxiv.org/abs/2306.08687)]\n\n[arxiv 2023.06]Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2306.08645)]\n\n[arxiv 2023.07]On the Cultural Gap in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2307.02971)]\n\n[arxiv 2023.07]How to Detect Unauthorized Data Usages in Text-to-image Diffusion Models [[PDF](https://arxiv.org/abs/2307.03108)]\n\n[arxiv 2023.08]Manipulating Embeddings of Stable Diffusion Prompts [[PDF](https://arxiv.org/pdf/2308.12059.pdf)]\n\n[arxiv 2023.10]Text-image Alignment for Diffusion-based Perception [[PDF](https://arxiv.org/abs/2310.00031),[Page](https://www.vision.caltech.edu/tadp/)]\n\n[arxiv 2023.10]What Does Stable Diffusion Know about the 3D Scene? 
[[PDF](https://arxiv.org/abs/2310.06836)]\n\n[arxiv 2023.11]Holistic Evaluation of Text-To-Image Models [[PDF](https://arxiv.org/abs/2311.04287)]\n\n[arxiv 2023.11]On the Limitation of Diffusion Models for Synthesizing Training Datasets [[PDF](https://arxiv.org/abs/2311.13090)]\n\n[arxiv 2023.12]Rich Human Feedback for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2312.10240)]\n\n[arxiv 2024.01]Resolution Chromatography of Diffusion Models [[PDF](https://arxiv.org/abs/2401.10247)]\n\n[arxiv 2024.04]Bigger is not Always Better: Scaling Properties of Latent Diffusion Models [[PDF](https://arxiv.org/abs/2404.01367)]\n\n[arxiv 2024.08]  Not Every Image is Worth a Thousand Words: Quantifying Originality in Stable Diffusion [[PDF](https://arxiv.org/abs/2408.08184)]\n\n[arxiv 2024.08]GradBias: Unveiling Word Influence on Bias in Text-to-Image Generative Models\n [[PDF](), [Page]()]\n\n[arxiv 2024.10] Magnet: We Never Know How Text-to-Image Diffusion Models Work, Until We Learn How Vision-Language Models Function  [[PDF](https://arxiv.org/abs/2409.19967),[Page](https://github.com/I2-Multimedia-Lab/Magnet)]\n\n[arxiv 2024.10]  Revisit Large-Scale Image-Caption Data in Pre-training Multimodal Foundation Models [[PDF](https://arxiv.org/abs/2410.02740)]\n\n[arxiv 2024.10] Scaling Laws For Diffusion Transformers [[PDF](https://arxiv.org/abs/2410.08184)]\n\n[arxiv 2024.11]  Diffusion Models as Cartoonists! The Curious Case of High Density Regions [[PDF](https://arxiv.org/pdf/2411.01293)]\n\n[arxiv 2024.12] DMin: Scalable Training Data Influence Estimation for Diffusion Models  [[PDF](https://arxiv.org/abs/2412.08637)]\n\n[arxiv 2025.02]  ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features [[PDF](https://arxiv.org/pdf/2502.04320)]\n\n[arxiv 2025.04] Analysis of Attention in Video Diffusion Transformers  [[PDF](https://arxiv.org/abs/2504.10317),[Page](https://seasoned-draw-b97.notion.site/Analysis-of-Attention-in-Video-Diffusion-Transformers-1aea04ac6ca780c2b6b2cf6ed87e311f)] \n\n[arxiv 2025.09] Does FLUX Already Know How to Perform Physically Plausible Image Composition?  [[PDF](https://arxiv.org/abs/2509.21278)]\n\n[arxiv 2025.10]  How Diffusion Models Memorize [[PDF](https://arxiv.org/abs/2509.25705)]\n\n[arxiv 2025.10] LayerSync: Self-aligning Intermediate Layers  [[PDF](https://arxiv.org/abs/2510.12581),[Page](https://github.com/vita-epfl/LayerSync)] ![Code](https://img.shields.io/github/stars/vita-epfl/LayerSync?style=social&label=Star)\n\n[arxiv 2025.12] Is Nano Banana Pro a Low-Level Vision All-Rounder? A Comprehensive Evaluation on 14 Tasks and 40 Datasets  [[PDF](https://arxiv.org/abs/2512.15110),[Page](https://lowlevelbanana.github.io/)] ![Code](https://img.shields.io/github/stars/zplusdragon/LowLevelBanana?style=social&label=Star)\n\n[arxiv 2026.03] CRAFT: Aligning Diffusion Models with Fine-Tuning Is Easier Than You Think  [[PDF](https://arxiv.org/abs/2603.18991)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## Evaluation \n\n[arxiv 2024.01]Rethinking FID: Towards a Better Evaluation Metric for Image Generation [[PDF](https://arxiv.org/abs/2401.09603)]\n\n[arxiv 2024.04]Evaluating Text-to-Visual Generation with Image-to-Text Generation [[PDF](https://arxiv.org/pdf/2404.01291.pdf),[Page](https://linzhiqiu.github.io/papers/vqascore/)]\n\n[arxiv 2024.04]Who Evaluates the Evaluations? 
Objectively Scoring Text-to-Image Prompt Coherence Metrics with T2IScoreScore (TS2) [[PDF](https://arxiv.org/abs/2404.04251)]\n\n\n[arxiv 2024.05]FAIntbench: A Holistic and Precise Benchmark for Bias Evaluation in Text-to-Image Models [[PDF](https://arxiv.org/abs/2405.17814)]\n\n[arxiv 2024.06]GAIA: Rethinking Action Quality Assessment for AI-Generated Videos [[PDF](https://arxiv.org/abs/2406.06087)]\n\n[arxiv 2024.06]Words Worth a Thousand Pictures: Measuring and Understanding Perceptual Variability in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2406.08482)]\n\n\n[arxiv 2024.06]PhyBench: A Physical Commonsense Benchmark for Evaluating Text-to-Image Models [[PDF](https://arxiv.org/abs/2406.11802)]\n\n[arxiv 2024.06]Holistic Evaluation for Interleaved Text-and-Image Generation [[PDF](https://arxiv.org/abs/2406.14643)]\n\n[arxiv 2024.08]E-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment [[PDF](https://arxiv.org/abs/2408.11481)]\n\n[arxiv 2024.10] GRADE: Quantifying Sample Diversity in Text-to-Image Models  [[PDF](https://arxiv.org/abs/2410.22592),[Page](https://royira.github.io/GRADE/)]\n\n[arxiv 2024.11] TypeScore: A Text Fidelity Metric for Text-to-Image Generative Models  [[PDF](https://arxiv.org/abs/2411.02437)]\n\n[arxiv 2024.11] Image2Text2Image: A Novel Framework for Label-Free Evaluation of Image-to-Text Generation with Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/abs/2411.05706)]\n\n[arxiv 2024.11] Evaluating the Generation of Spatial Relations in Text and Image Generative Models  [[PDF](https://arxiv.org/abs/2411.07664)]\n\n[arxiv 2024.12] T2I-FactualBench: Benchmarking the Factuality of Text-to-Image Models with Knowledge-Intensive Concepts  [[PDF](https://arxiv.org/abs/2412.04300)]\n\n[arxiv 2024.12]  BodyMetric: Evaluating the Realism of Human Bodies in Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2412.04086)]\n\n[arxiv 2025.01] IE-Bench: Advancing the Measurement of Text-Driven Image Editing for Human Perception Alignment  [[PDF](https://arxiv.org/abs/2501.09927)]\n\n[arxiv 2025.01] IMAGINE-E: Image Generation Intelligence Evaluation of State-of-the-art Text-to-Image Models  [[PDF](https://arxiv.org/abs/2501.13920),[Page](https://github.com/jylei16/Imagine-e)] ![Code](https://img.shields.io/github/stars/jylei16/Imagine-e?style=social&label=Star)\n\n[arxiv 2025.03]  Q-Eval-100K: Evaluating Visual Quality and Alignment Level for Text-to-Vision Content [[PDF](https://arxiv.org/pdf/2503.02357),[Page](https://github.com/zzc-1998/Q-Eval)] ![Code](https://img.shields.io/github/stars/zzc-1998/Q-Eval?style=social&label=Star)\n\n[arxiv 2025.03] GRADEO: Towards Human-Like Evaluation for Text-to-Video Generation via Multi-Step Reasoning  [[PDF](https://arxiv.org/pdf/2503.02341)]\n\n[arxiv 2025.03]  WISE: A World Knowledge-Informed Semantic Evaluation for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2503.07265),[Page](https://github.com/PKU-YuanGroup/WISE)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/WISE?style=social&label=Star)\n\n[arxiv 2025.03]  Q-Insight: Understanding Image Quality via Visual Reinforcement Learning [[PDF](https://arxiv.org/abs/2503.22679),[Page](https://github.com/lwq20020127/Q-Insight)] ![Code](https://img.shields.io/github/stars/lwq20020127/Q-Insight?style=social&label=Star)\n\n[arxiv 2025.04] HumanAesExpert: Advancing a Multi-Modality Foundation Model for Human Image Aesthetic Assessment  
[[PDF](https://arxiv.org/abs/2503.23907),[Page](https://humanaesexpert.github.io/HumanAesExpert/)] ![Code](https://img.shields.io/github/stars/HumanAesExpert/HumanAesExpert?style=social&label=Star)\n\n[arxiv 2025.04] GPT-ImgEval: A Comprehensive Benchmark for Diagnosing GPT4o in Image Generation  [[PDF](https://arxiv.org/abs/2504.02782),[Page](https://github.com/PicoTrex/GPT-ImgEval)] ![Code](https://img.shields.io/github/stars/PicoTrex/GPT-ImgEval?style=social&label=Star)\n\n[arxiv 2025.04] RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation  [[PDF](https://arxiv.org/abs/2504.17502)]\n\n[arxiv 2025.05] WorldGenBench: A World-Knowledge-Integrated Benchmark for Reasoning-Driven Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2505.01490),[Page](https://dwanzhang-ai.github.io/WorldGenBench/)] \n\n[arxiv 2025.05] T2I-ConBench: Text-to-Image Benchmark for Continual Post-training  [[PDF](https://arxiv.org/abs/2505.16875),[Page](https://k1nght.github.io/T2I-ConBench/)] ![Code](https://img.shields.io/github/stars/K1nght/T2I-ConBench?style=social&label=Star)\n\n[arxiv 2025.05]  CineTechBench: A Benchmark for Cinematographic Technique Understanding and Generation [[PDF](http://arxiv.org/abs/2505.15145),[Page](https://github.com/PRIS-CV/CineTechBench)] ![Code](https://img.shields.io/github/stars/PRIS-CV/CineTechBench?style=social&label=Star)\n\n[arxiv 2025.05] VisualQuality-R1: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank  [[PDF](https://arxiv.org/abs/2505.14460),[Page](https://github.com/TianheWu/VisualQuality-R1)] ![Code](https://img.shields.io/github/stars/TianheWu/VisualQuality-R1?style=social&label=Star)\n\n[arxiv 2025.05] VTBench: Evaluating Visual Tokenizers for Autoregressive Image Generation  [[PDF](https://arxiv.org/pdf/2505.13439),[Page](https://github.com/huawei-lin/VTBench)] ![Code](https://img.shields.io/github/stars/huawei-lin/VTBench?style=social&label=Star)\n\n[arxiv 2025.08] T2I-ReasonBench: Benchmarking Reasoning-Informed Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2508.17472),[Page](https://github.com/KaiyueSun98/T2I-ReasonBench)] ![Code](https://img.shields.io/github/stars/KaiyueSun98/T2I-ReasonBench?style=social&label=Star)\n\n[arxiv 2025.09] MagicMirror: A Large-Scale Dataset and Benchmark for Fine-Grained Artifacts Assessment in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2509.10260),[Page](https://wj-inf.github.io/MagicMirror-page/)] ![Code](https://img.shields.io/github/stars/wj-inf/MagicMirror?style=social&label=Star)\n\n[arxiv 2025.09] GenExam: A Multidisciplinary Text-to-Image Exam  [[PDF](https://arxiv.org/abs/2509.14232),[Page](https://github.com/OpenGVLab/GenExam)] ![Code](https://img.shields.io/github/stars/OpenGVLab/GenExam?style=social&label=Star)\n\n[arxiv 2025.10] Factuality Matters: When Image Generation and Editing Meet Structured Visuals  [[PDF](https://arxiv.org/abs/2510.05091),[Page](https://structvisuals.github.io/)] ![Code](https://img.shields.io/github/stars/zhuole1025/Structured-Visuals?style=social&label=Star)\n\n[arxiv 2025.10] Pico-Banana-400K: A Large-Scale Dataset for Text-Guided Image Editing\n  [[PDF](https://arxiv.org/abs/2510.19808),[Page](https://github.com/apple/pico-banana-400k)] ![Code](https://img.shields.io/github/stars/apple/pico-banana-400k?style=social&label=Star)\n\n[arxiv 2025.11]  UniREditBench: A Unified Reasoning-based Image Editing Benchmark [[PDF](https://arxiv.org/abs/2511.01295),[Page](https://maplebb.github.io/UniREditBench/)] 
![Code](https://img.shields.io/github/stars/Maplebb/UniREditBench?style=social&label=Star)\n\n[arxiv 2025.11]  When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought [[PDF](https://arxiv.org/abs/2511.02779),[Page](https://mira-benchmark.github.io/)] \n\n[arxiv 2025.11] Benchmarking Diversity in Image Generation via Attribute-Conditional Human Evaluation  [[PDF](https://arxiv.org/pdf/2511.10547)]\n\n[arxiv 2026.01]  Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models [[PDF](https://arxiv.org/abs/2601.20354),[Page](https://github.com/AMAP-ML/SpatialGenEval)] ![Code](https://img.shields.io/github/stars/AMAP-ML/SpatialGenEval?style=social&label=Star)\n\n[arxiv 2026.01] UEval: A Benchmark for Unified Multimodal Generation  [[PDF](https://arxiv.org/pdf/2601.22155),[Page](https://zlab-princeton.github.io/UEval/)] ![Code](https://img.shields.io/github/stars/zlab-princeton/UEval?style=social&label=Star)\n\n[arxiv 2026.01]  GenArena: How Can We Achieve Human-Aligned Evaluation for Visual Generation Tasks? [[PDF](https://arxiv.org/abs/2602.06013),[Page](https://genarena.github.io/)] ![Code](https://img.shields.io/github/stars/ruihanglix/genarena?style=social&label=Star)\n\n[arxiv 2026.01] RISE-Video: Can Video Generators Decode Implicit World Rules?  [[PDF](https://arxiv.org/abs/2602.05986),[Page](https://github.com/VisionXLab/RISE-Video)] ![Code](https://img.shields.io/github/stars/VisionXLab/RISE-Video?style=social&label=Star)\n\n[arxiv 2026.03] Omni IIE Bench: Benchmarking the Practical Capabilities of Image Editing Models  [[PDF](https://arxiv.org/abs/2603.16944)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Feedback\n[arxiv 2023.11]Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model [[PDF](https://arxiv.org/abs/2311.13231)]\n\n[arxiv 2024.03]AGFSync: Leveraging AI-Generated Feedback for Preference Optimization in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.13352)]\n\n[arxiv 2024.04]RL for Consistency Models: Faster Reward Guided Text-to-Image Generation [[PDF](https://arxiv.org/abs/2404.03673)]\n\n[arxiv 2024.04]Aligning Diffusion Models by Optimizing Human Utility [[PDF](https://arxiv.org/abs/2404.04465)]\n\n[arxiv 2024.07]FDS: Feedback-guided Domain Synthesis with Multi-Source Conditional Diffusion Models for Domain Generalization [[PDF](https://arxiv.org/abs/2407.03588), [Page](https://github.com/Mehrdad-Noori/FDS)]\n\n[arxiv 2024.08]Towards Reliable Advertising Image Generation Using Human Feedback [[PDF](https://arxiv.org/abs/2408.00418)]\n\n[arxiv 2024.08] I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing[[PDF](https://arxiv.org/abs/2408.14180), [Page](https://github.com/cocoshe/I2EBench)]\n\n[arxiv 2024.10]  IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2410.07171),[Page](https://github.com/YangLing0818/IterComp)]\n\n[arxiv 2024.10] Scalable Ranked Preference Optimization for Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2410.18013),[Page](https://snap-research.github.io/RankDPO/)]\n\n[arxiv 2024.10] PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference  [[PDF](https://arxiv.org/abs/2410.21966),[Page](https://prefpaint.github.io/)]\n\n[arxiv 2024.11] Reward Incremental Learning in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2411.17310)] \n\n[arxiv 2024.12] 
Bridging the Gap: Aligning Text-to-Image Diffusion Models with Specific Feedback  [[PDF](https://arxiv.org/abs/2412.00122),[Page](https://github.com/kingniu0329/Visions)] ![Code](https://img.shields.io/github/stars/kingniu0329/Visions?style=social&label=Star)\n\n[arxiv 2025.01]  VisionReward: Fine-Grained Multi-Dimensional Human Preference Learning for Image and Video Generation [[PDF](https://arxiv.org/abs/2412.21059),[Page](https://github.com/THUDM/VisionReward)] ![Code](https://img.shields.io/github/stars/THUDM/VisionReward?style=social&label=Star)\n\n[arxiv 2025.02] Calibrated Multi-Preference Optimization for Aligning Diffusion Models  [[PDF](https://arxiv.org/pdf/2502.02588) ]\n\n[arxiv 2025.02]  Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level Preference Optimization [[PDF](https://arxiv.org/abs/2502.01051),[Page](https://github.com/casiatao/LPO)] ![Code](https://img.shields.io/github/stars/casiatao/LPO?style=social&label=Star)\n\n[arxiv 2025.02] Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment  [[PDF](https://arxiv.org/abs/2502.05153),[Page](https://roar-ai.github.io/hummingbird/)] \n\n[arxiv 2025.02]  Preference Alignment on Diffusion Model: A Comprehensive Survey for Image Generation and Editing [[PDF](https://arxiv.org/pdf/2502.07829)]\n\n[arxiv 2025.02] Generating on Generated: An Approach Towards Self-Evolving Diffusion Models  [[PDF](https://arxiv.org/pdf/2502.09963)]\n\n[arxiv 2025.02]  Diffusion-Sharpening: Fine-tuning Diffusion Models with Denoising Trajectory Sharpening [[PDF](https://arxiv.org/pdf/2502.12146),[Page](https://github.com/Gen-Verse/Diffusion-Sharpening)] ![Code](https://img.shields.io/github/stars/Gen-Verse/Diffusion-Sharpening?style=social&label=Star)\n\n[arxiv 2025.02] Learning to Sample Effective and Diverse Prompts for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2502.11477)]\n\n[arxiv 2025.02]  InterFeedback: Unveiling Interactive Intelligence of Large Multimodal Models via Human Feedback [[PDF](https://arxiv.org/abs/2502.15027)]\n\n[arxiv 2025.03]  Rewards Are Enough for Fast Photo-Realistic Text-to-image Generation [[PDF](https://arxiv.org/abs/2503.13070),[Page](https://github.com/Luo-Yihong/R0)] ![Code](https://img.shields.io/github/stars/Luo-Yihong/R0?style=social&label=Star)\n\n[arxiv 2025.04] InstructEngine: Instruction-driven Text-to-Image Alignment  [[PDF](https://arxiv.org/pdf/2504.10329)]\n\n[arxiv 2025.04]  Reasoning Physical Video Generation with Diffusion Timestep Tokens via Reinforcement Learning [[PDF](https://arxiv.org/pdf/2504.15932)]\n\n[arxiv 2025.04] SUDO: Enhancing Text-to-Image Diffusion Models with Self-Supervised Direct Preference Optimization  [[PDF](https://arxiv.org/abs/2504.14534),[Page](https://github.com/SPengLiang/SUDO)] ![Code](https://img.shields.io/github/stars/SPengLiang/SUDO?style=social&label=Star)\n\n[arxiv 2025.05]  Flow-GRPO: Training Flow Matching Models via Online RL [[PDF](https://arxiv.org/abs/2505.05470),[Page](https://github.com/yifan123/flow_grpo)] ![Code](https://img.shields.io/github/stars/yifan123/flow_grpo?style=social&label=Star)\n\n[arxiv 2025.05] VARD: Efficient and Dense Fine-Tuning for Diffusion Models with Value-based RL  [[PDF](https://arxiv.org/abs/2505.15791)]\n\n[arxiv 2025.05] Diffusion-NPO: Negative Preference Optimization for Better Preference Aligned Generation of Diffusion Models [[PDF](https://arxiv.org/abs/2505.11245),[Page](https://github.com/G-U-N/Diffusion-NPO)] 
![Code](https://img.shields.io/github/stars/G-U-N/Diffusion-NPO?style=social&label=Star)\n\n[arxiv 2025.06] Evolutionary Caching to Accelerate Your Off-the-Shelf Diffusion Model  [[PDF](https://arxiv.org/abs/2506.15682),[Page](https://github.com/aniaggarwal/ecad)] ![Code](https://img.shields.io/github/stars/aniaggarwal/ecad?style=social&label=Star)\n\n[arxiv 2025.06] PrefPaint: Enhancing Image Inpainting through Expert Human Feedback  [[PDF](https://arxiv.org/abs/2506.21834)]\n\n[arxiv 2025.07] Inversion-DPO: Precise and Efficient Post-Training for Diffusion Models  [[PDF](https://arxiv.org/abs/2507.11554),[Page](https://github.com/MIGHTYEZ/Inversion-DPO)] ![Code](https://img.shields.io/github/stars/MIGHTYEZ/Inversion-DPO?style=social&label=Star)\n\n[arxiv 2025.07] Enhancing Reward Models for High-quality Image Generation: Beyond Text-Image Alignment [[PDF](https://arxiv.org/abs/2507.19002),[Page](https://github.com/BarretBa/ICTHP)] ![Code](https://img.shields.io/github/stars/BarretBa/ICTHP?style=social&label=Star)\n\n[arxiv 2025.07]  Multimodal LLMs as Customized Reward Models for Text-to-Image Generation [[PDF](https://arxiv.org/abs/2507.21391),[Page](https://github.com/sjz5202/LLaVA-Reward)] ![Code](https://img.shields.io/github/stars/sjz5202/LLaVA-Reward?style=social&label=Star)\n\n[arxiv 2025.07]  MixGRPO: Unlocking Flow-based GRPO Efficiency with Mixed ODE-SDE [[PDF](https://arxiv.org/abs/2507.21802)]\n\n[arxiv 2025.08] TempFlow-GRPO: When Timing Matters for GRPO in Flow Models  [[PDF](https://arxiv.org/abs/2508.04324)]\n\n[arxiv 2025.08] Learning User Preferences for Image Generation Models  [[PDF](https://arxiv.org/abs/2508.08220),[Page](https://github.com/Mowenyii/learn-user-pref)] ![Code](https://img.shields.io/github/stars/Mowenyii/learn-user-pref?style=social&label=Star)\n\n[arxiv 2025.08] Instant Preference Alignment for Text-to-Image Diffusion Models  [[PDF](https://arxiv.org/pdf/2508.17718)]\n\n[arxiv 2025.08]  OneReward: Unified Mask-Guided Image Generation via Multi-Task Human Preference Learning [[PDF](https://arxiv.org/abs/2508.21066),[Page](https://one-reward.github.io/)] \n\n[arxiv 2025.08] Pref-GRPO: Pairwise Preference Reward-based GRPO for Stable Text-to-Image Reinforcement Learning  [[PDF](https://arxiv.org/pdf/2508.20751),[Page](https://codegoat24.github.io/UnifiedReward/Pref-GRPO)] ![Code](https://img.shields.io/github/stars/CodeGoat24/Pref-GRPO?style=social&label=Star)\n\n[arxiv 2025.09] BranchGRPO: Stable and Efficient GRPO with Structured Branching in Diffusion Models  [[PDF](https://arxiv.org/abs/2509.06040)]\n\n[arxiv 2025.10] Free Lunch Alignment of Text-to-Image Diffusion Models without Preference Image Pairs  [[PDF](https://arxiv.org/abs/2509.25771),[Page](https://github.com/DSL-Lab/T2I-Free-Lunch-Alignment)] ![Code](https://img.shields.io/github/stars/DSL-Lab/T2I-Free-Lunch-Alignment?style=social&label=Star)\n\n[arxiv 2025.10] Importance Sampling for Multi-Negative Multimodal Direct Preference Optimization  [[PDF](https://arxiv.org/abs/2509.25717)]\n\n[arxiv 2025.10] G2RPO: Granular GRPO for Precise Reward in Flow Models  [[PDF](https://arxiv.org/abs/2510.01982),[Page](https://github.com/bcmi/Granular-GRPO)] ![Code](https://img.shields.io/github/stars/bcmi/Granular-GRPO?style=social&label=Star)\n\n[arxiv 2025.10]  OmniQuality-R: Advancing Reward Models through All-Encompassing Quality Assessment [[PDF](https://arxiv.org/abs/2510.10609),[Page](https://github.com/yeppp27/OmniQuality-R)] 
![Code](https://img.shields.io/github/stars/yeppp27/OmniQuality-R?style=social&label=Star)\n\n[arxiv 2025.10] Sample By Step, Optimize By Chunk: Chunk-Level GRPO For Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2510.21583)]\n\n[arxiv 2025.10] GRPO-Guard: Mitigating Implicit Over-Optimization in Flow Matching via Regulated Clipping  [[PDF](https://arxiv.org/abs/2510.22319),[Page](https://jingw193.github.io/GRPO-Guard/#)] \n\n[arxiv 2025.11]  Enhancing Diffusion-based Restoration Models via Difficulty-Adaptive Reinforcement Learning with IQA Reward [[PDF](https://arxiv.org/abs/2511.01645)]\n\n[arxiv 2025.11] Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation  [[PDF](https://arxiv.org/pdf/2511.01450)]\n\n[arxiv 2025.11]  Neighbor GRPO: Contrastive ODE Policy Optimization Aligns Flow Models [[PDF](https://arxiv.org/pdf/2511.16955)]\n\n[arxiv 2025.11] Seeing What Matters: Visual Preference Policy Optimization for Visual Generation  [[PDF](https://arxiv.org/pdf/2511.18719)]\n\n[arxiv 2025.11] RubricRL: Simple Generalizable Rewards for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2511.20651)]\n\n[arxiv 2025.12] PC-GRPO: Puzzle Curriculum GRPO for Vision-Centric Reasoning  [[PDF](https://arxiv.org/abs/2512.14944),[Page](https://pcgrpo.github.io/)] \n\n[arxiv 2025.12] DiverseGRPO: Mitigating Mode Collapse in Image Generation via Diversity-Aware GRPO  [[PDF](https://arxiv.org/abs/2512.21514),[Page](https://henglin-liu.github.io/DiverseGRPO/)] \n\n[arxiv 2026.01] Unified Personalized Reward Model for Vision Generation  [[PDF](https://arxiv.org/abs/2602.02380),[Page](https://codegoat24.github.io/UnifiedReward/flex)] ![Code](https://img.shields.io/github/stars/CodeGoat24/Pref-GRPO?style=social&label=Star)\n\n[arxiv 2026.01]  PromptRL: Prompt Matters in RL for Flow-Based Image Generation [[PDF](https://arxiv.org/abs/2602.01382),[Page](https://github.com/G-U-N/UniRL)] ![Code](https://img.shields.io/github/stars/G-U-N/UniRL?style=social&label=Star)\n\n[arxiv 2026.03] Diffusion Reinforcement Learning via Centered Reward Distillation  [[PDF](https://arxiv.org/abs/2603.14128)]\n\n[arxiv 2026.03] UniGRPO: Unified Policy Optimization for Reasoning-Driven Visual Generation  [[PDF](https://arxiv.org/pdf/2603.23500)]\n\n[arxiv 2026.03] Self-Corrected Image Generation with Explainable Latent Rewards  [[PDF](https://arxiv.org/abs/2603.24965),[Page](https://yinyiluo.github.io/xLARD/)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## GPT-4o evaluation \n[arxiv 2025.05] Preliminary Explorations with GPT-4o(mni) Native Image Generation  [[PDF](https://arxiv.org/pdf/2505.05501)]\n\n[arxiv 2025.05] A Preliminary Study for GPT-4o on Image Restoration  [[PDF](https://arxiv.org/abs/2505.05621)]\n\n[arxiv 2025.10] EditReward: A Human-Aligned Reward Model for Instruction-Guided Image Editing  [[PDF](https://arxiv.org/abs/2509.26346),[Page](https://tiger-ai-lab.github.io/EditReward/)] ![Code](https://img.shields.io/github/stars/TIGER-AI-Lab/EditReward?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Finetuning \n[arxiv 2021.06] Low-rank Adaptation for Fast Text-to-Image Diffusion Fine-tuning [[PDF](https://arxiv.org/pdf/2106.09685.pdf), [code](https://github.com/cloneofsimo/lora)]\n\n[arxiv 2024.02]DoRA: Weight-Decomposed Low-Rank Adaptation 
\n\n[arxiv 2024.02] DoRA: Weight-Decomposed Low-Rank Adaptation [[PDF](https://arxiv.org/pdf/2402.09353.pdf)]\n\n[arxiv 2024.06]Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models [[PDF](https://arxiv.org/abs/2405.21050)]\n\n\n## Related \n\n[arxiv 2022.04]VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance [[PDF](https://arxiv.org/abs/2204.08583), [Code](https://github.com/EleutherAI/vqgan-clip/tree/main/notebooks)]\n\n[arxiv 2022.11]Investigating Prompt Engineering in Diffusion Models \\[[PDF](https://arxiv.org/pdf/2211.15462.pdf)\\] \n\n[arxiv 2022.11]Versatile Diffusion: Text, Images and Variations All in One Diffusion Model \\[[PDF](https://arxiv.org/pdf/2211.08332.pdf)\\] \n\n[arxiv 2022.11; ByteDance]Shifted Diffusion for Text-to-image Generation  \\[[PDF](https://arxiv.org/pdf/2211.15388.pdf)\\] \n\n[arxiv 2022.11]3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models  \\[[PDF](https://arxiv.org/pdf/2211.14108.pdf)\\] \n\n**[ECCV 2022; Best Paper]** ***Partial Distance:*** On the Versatile Uses of Partial Distance Correlation in Deep Learning. \\[[PDF](https://arxiv.org/abs/2207.09684)\\] \n\n[arxiv 2022.12]SinDDM: A Single Image Denoising Diffusion Model \\[[PDF](https://arxiv.org/pdf/2211.16582.pdf)\\] \n\n[arxiv 2022.12] Diffusion Guided Domain Adaptation of Image Generators \\[[PDF](https://arxiv.org/pdf/2212.04473.pdf)\\] \n\n[arxiv 2022.12]Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis [[PDF](https://arxiv.org/pdf/2212.05032.pdf)]\n\n[arxiv 2022.12]Scalable Diffusion Models with Transformers[[PDF](https://arxiv.org/pdf/2212.09748.pdf)]\n\n[arxiv 2022.12] Generalized Decoding for Pixel, Image, and Language [[PDF](https://arxiv.org/pdf/2212.11270.pdf), [Page](https://github.com/microsoft/X-Decoder)]\n\n[arxiv 2023.03]Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models [[PDF](https://arxiv.org/abs/2303.04671)]\n\n[arxiv 2023.03]Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2303.04803), [Page](https://jerryxu.net/ODISE/)]\n\n[arxiv 2023.03]Larger language models do in-context learning differently [[PDF](https://arxiv.org/abs/2303.03846)]\n\n[arxiv 2023.03]One Transformer Fits All Distributions in Multi-Modal Diffusion at Scale [[PDF](https://arxiv.org/pdf/2303.06285.pdf)]\n\n[arxiv 2023.03]Align, Adapt and Inject: Sound-guided Unified Image Generation [[PDF](https://arxiv.org/pdf/2306.11504.pdf)]\n\n[arxiv 2023.11]ToddlerDiffusion: Flash Interpretable Controllable Diffusion Model [[PDF](https://arxiv.org/abs/2311.14542)]\n\n\n[arxiv 2024.04]Many-to-many Image Generation with Auto-regressive Diffusion Models [[PDF](https://arxiv.org/abs/2404.03109)]\n\n[arxiv 2024.04]On the Scalability of Diffusion-based Text-to-Image Generation [[PDF](https://arxiv.org/abs/2404.02883)]\n\n\n[arxiv 2024.06]Frozen CLIP: A Strong Backbone for Weakly Supervised Semantic Segmentation [[PDF](https://arxiv.org/abs/2406.11189)]\n\n\n[arxiv 2024.06]Diffusion Models in Low-Level Vision: A Survey [[PDF](https://arxiv.org/abs/2406.11138)]\n\n\n[arxiv 2024.06]A Survey of Multimodal-Guided Image Editing with Text-to-Image Diffusion Models [[PDF](https://arxiv.org/abs/2406.14555), [Page](https://github.com/xinchengshuai/Awesome-Image-Editing)]\n\n[arxiv 2024.07] Replication in Visual Diffusion Models: A Survey and Outlook [[PDF](https://arxiv.org/pdf/2408.00001), [Page](https://github.com/WangWenhao0716/Awesome-Diffusion-Replication)]\n\n[arxiv 2024.09]A 
Survey of Multimodal Composite Editing and Retrieval [[PDF](https://arxiv.org/abs/2409.05405)]\n\n[arxiv 2024.09]Pushing Joint Image Denoising and Classification to the Edge [[PDF](https://arxiv.org/abs/2409.08943)]\n\n[arxiv 2024.09] Alignment of Diffusion Models: Fundamentals, Challenges, and Future[[PDF](https://arxiv.org/abs/2409.07253)]\n\n[arxiv 2024.09]Taming Diffusion Models for Image Restoration: A Review [[PDF](https://arxiv.org/abs/2409.10353)]\n\n[arxiv 2025.01] An Empirical Study of Autoregressive Pre-training from Videos  [[PDF](https://arxiv.org/pdf/2501.05453),[Page](https://brjathu.github.io/toto/)] \n\n[arxiv 2025.11]  SAM 3: Segment Anything with Concepts [[PDF](https://arxiv.org/pdf/2511.16719),[Page](https://github.com/facebookresearch/sam3)] ![Code](https://img.shields.io/github/stars/facebookresearch/sam3?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## architecture / distribution\n[arxiv 2024.09]HydraViT: Stacking Heads for a Scalable ViT [[PDF](https://arxiv.org/abs/2409.17978), [Page](https://github.com/ds-kiel/HydraViT)]\n\n[arxiv 2024.10] MaskMamba: A Hybrid Mamba-Transformer Model for Masked Image Generation[[PDF](https://arxiv.org/pdf/2409.19937)]\n\n[arxiv 2024.10] Dynamic Diffusion Transformer  [[PDF](https://arxiv.org/abs/2410.03456),[Page](https://github.com/NUS-HPC-AI-Lab/Dynamic-Diffusion-Transformer)]\n\n[arxiv 2024.10]  DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation [[PDF](https://arxiv.org/abs/2410.08159)]\n\n[arxiv 2024.10] Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer  [[PDF](https://arxiv.org/abs/2410.10629),[Page](https://nvlabs.github.io/Sana/)]\n\n[arxiv 2024.10] GlobalMamba: Global Image Serialization for Vision Mamba  [[PDF](https://arxiv.org/abs/2410.10316),[Page](https://github.com/wangck20/GlobalMamba)]\n\n[arxiv 2024.10] MoH: Multi-Head Attention as Mixture-of-Head Attention[[PDF](https://arxiv.org/abs/2410.11842),[Page](https://github.com/SkyworkAI/MoH)]\n\n[arxiv 2024.11] Edify Image: High-Quality Image Generation with Pixel Space Laplacian Diffusion Models  [[PDF](https://arxiv.org/abs/2411.07126),[Page](https://research.nvidia.com/labs/dir/edify-image/)]\n\n[arxiv 2024.12] Nested Diffusion Models Using Hierarchical Latent Priors  [[PDF](https://arxiv.org/abs/2412.05984)]\n\n[arxiv 2024.12] SnapGen: Taming High-Resolution Text-to-Image Models for Mobile Devices with Efficient Architectures and Training  [[PDF](),[Page](https://snap-research.github.io/snapgen/)] \n\n[arxiv 2025.01]  Decentralized Diffusion Models [[PDF](https://arxiv.org/abs/2501.05450),[Page](https://decentralizeddiffusion.github.io/)] \n\n[arxiv 2025.01]  ARFlow: Autoregressive Flow with Hybrid Linear Attention [[PDF](https://arxiv.org/abs/2501.16085)]\n\n[arxiv 2025.02]  Spiking Vision Transformer with Saccadic Attention [[PDF](https://arxiv.org/abs/2502.12677)]\n\n[arxiv 2025.03]  Transformers without Normalization [[PDF](https://arxiv.org/abs/2503.10622),[Page](https://jiachenzhu.github.io/DyT/)] ![Code](https://img.shields.io/github/stars/jiachenzhu/DyT?style=social&label=Star)\n\n[arxiv 2025.03]  DiffMoE: Dynamic Token Selection for Scalable Diffusion Transformers [[PDF](https://arxiv.org/abs/2503.14487),[Page](https://shiml20.github.io/DiffMoE/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/DiffMoE?style=social&label=Star)\n\n[arxiv 2025.03]  UniDisc: Unified Multimodal Discrete Diffusion 
[[PDF](https://arxiv.org/abs/2503.20853),[Page](https://unidisc.github.io/)] ![Code](https://img.shields.io/github/stars/alexanderswerdlow/unidisc?style=social&label=Star)\n\n[arxiv 2025.04]  DDT: Decoupled Diffusion Transformer [[PDF](https://arxiv.org/abs/2504.05741),[Page](https://github.com/MCG-NJU/DDT)] ![Code](https://img.shields.io/github/stars/MCG-NJU/DDT?style=social&label=Star)\n\n[arxiv 2025.10] BLIP3o-NEXT: Next Frontier of Native Image Generation  [[PDF](https://arxiv.org/abs/2510.15857),[Page](https://github.com/JiuhaiChen/BLIP3o)] ![Code](https://img.shields.io/github/stars/JiuhaiChen/BLIP3o?style=social&label=Star)\n\n[arxiv 2025.10]  Latent Diffusion Model without Variational Autoencoder [[PDF](https://arxiv.org/abs/2510.15301),[Page](https://github.com/shiml20/SVG)] ![Code](https://img.shields.io/github/stars/shiml20/SVG?style=social&label=Star)\n\n[arxiv 2025.10] Routing Matters in MoE: Scaling Diffusion Transformers with Explicit Routing Guidance  [[PDF](https://arxiv.org/abs/2510.24711)]\n\n[arxiv 2025.12] Markovian Scale Prediction: A New Era of Visual Autoregressive Generation  [[PDF](https://arxiv.org/abs/2511.23334),[Page](https://luokairo.github.io/markov-var-page/)] ![Code](https://img.shields.io/github/stars/luokairo/Markov-VAR?style=social&label=Star)\n\n[arxiv 2025.12] Improved Mean Flows: On the Challenges of Fastforward Generative Models  [[PDF](https://arxiv.org/abs/2512.02012)]\n\n[arxiv 2025.12]  Loom: Diffusion-Transformer for Interleaved Generation [[PDF](https://arxiv.org/pdf/2512.18254),[Page](https://github.com/Plantian/Loom)] ![Code](https://img.shields.io/github/stars/Plantian/Loom?style=social&label=Star)\n\n[arxiv 2025.12]  SemanticGen: Video Generation in Semantic Space [[PDF](https://arxiv.org/abs/2512.20619),[Page](https://jianhongbai.github.io/SemanticGen/)] \n\n[arxiv 2026.01] LINA: Linear Autoregressive Image Generative Models with Continuous Tokens  [[PDF](https://arxiv.org/abs/2601.22630),[Page](https://github.com/techmonsterwang/LINA)] ![Code](https://img.shields.io/github/stars/techmonsterwang/LINA?style=social&label=Star)\n\n[arxiv 2026.01]  PixelGen: Pixel Diffusion Beats Latent Diffusion with Perceptual Loss [[PDF](https://arxiv.org/abs/2602.02493),[Page](https://zehong-ma.github.io/PixelGen/)] ![Code](https://img.shields.io/github/stars/Zehong-Ma/PixelGen?style=social&label=Star)\n\n[arxiv 2026.03] Cubic Discrete Diffusion: Discrete Visual Generation on High-Dimensional Representation Tokens  [[PDF](https://arxiv.org/abs/2603.19232)] ![Code](https://img.shields.io/github/stars/YuqingWang1029/CubiD?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Data \n[arxiv 2024.06]What If We Recaption Billions of Web Images with LLaMA-3? [[PDF](https://arxiv.org/abs/2406.08478), [Page](https://www.haqtu.me/Recap-Datacomp-1B/)]\n\n[arxiv 2025.07] GPT-IMAGE-EDIT-1.5M: A Million-Scale, GPT-Generated Image Dataset [[PDF](https://arxiv.org/abs/2507.21033),[Page](https://ucsc-vlaa.github.io/GPT-Image-Edit/)] ![Code](https://img.shields.io/github/stars/wyhlovecpp/GPT-Image-Edit?style=social&label=Star)\n\n[arxiv 2026.03] EditHF-1M: A Million-Scale Rich Human Preference Feedback for Image Editing  [[PDF](https://arxiv.org/abs/2603.14916),[Page](https://github.com/IntMeGroup/EditHF)] ![Code](https://img.shields.io/github/stars/IntMeGroup/EditHF?style=social&label=Star)\n\n\n\n## Repository\n***DIFFUSERS***: Hugging Face's state-of-the-art diffusion model library. \[[DIFFUSERS](https://github.com/huggingface/diffusers)\]
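\n\nA minimal text-to-image call with DIFFUSERS (a hedged sketch: the model id is only an example and APIs evolve, so check the library docs):\n\n```python\nimport torch\nfrom diffusers import StableDiffusionPipeline\n\n# load a pretrained text-to-image pipeline (the model id is just an example)\npipe = StableDiffusionPipeline.from_pretrained(\n    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16\n).to('cuda')\n\nimage = pipe('a watercolor fox in a snowy forest').images[0]\nimage.save('fox.png')\n```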
\n\n[arxiv 2023.03]Text-to-image Diffusion Model in Generative AI: A Survey [[PDF](https://arxiv.org/abs/2303.07909)]\n\n[arxiv 2023.04]Synthesizing Anyone, Anywhere, in Any Pose[[PDF](https://arxiv.org/abs/2304.03164)]\n\n## real-to-cg\n[arxiv 2024.09]Synergy and Synchrony in Couple Dances [[PDF](https://arxiv.org/abs/2409.04440), [Page](https://von31.github.io/synNsync/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## basics \n[arxiv 2024.12] Flow Matching Guide and Code  [[PDF](https://arxiv.org/abs/2412.06264),[Page](https://github.com/facebookresearch/flow_matching)] ![Code](https://img.shields.io/github/stars/facebookresearch/flow_matching?style=social&label=Star)\n\n[arxiv 2025.02] On the Guidance of Flow Matching  [[PDF](https://arxiv.org/abs/2502.02150),[Page](https://github.com/AI4Science-WestlakeU/flow_guidance)] ![Code](https://img.shields.io/github/stars/AI4Science-WestlakeU/flow_guidance?style=social&label=Star)
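\n\nA worked sketch of the conditional flow-matching objective the guide above covers, assuming the linear path x_t = (1 - t) x_0 + t x_1 with target velocity x_1 - x_0 (v is any network taking (x_t, t)):\n\n```python\nimport torch\n\ndef flow_matching_loss(v, x1):\n    # x1: data batch; x0: Gaussian noise; linear interpolation path\n    x0 = torch.randn_like(x1)\n    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)\n    xt = (1 - t) * x0 + t * x1\n    target = x1 - x0  # constant velocity along the linear path\n    return ((v(xt, t) - target) ** 2).mean()\n```\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n"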
  },
  {
    "path": "Multi-modality Generation.md",
    "content": "### [Awesome-Multimodal-Large-Language-Models](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models)\n\n\n## Dataset \n\n**MultiVerse**\n[arxiv 2025.10]  MultiVerse: A Multi-Turn Conversation Benchmark for Evaluating Large Vision and Language Models [[PDF](https://arxiv.org/abs/2510.16641),[Page](https://passing2961.github.io/multiverse-project-page/)] \n\n\n###  LLM \n\n[arxiv 2024.12] Training Large Language Models to Reason in a Continuous Latent Space  [[PDF](https://arxiv.org/pdf/2412.06769)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n\n## Feedback \n[arxiv 2025.02] DAMO: Data- and Model-aware Alignment of Multi-modal LLMs  [[PDF](https://arxiv.org/abs/2502.01943),[Page](https://github.com/injadlu/DAMO)] ![Code](https://img.shields.io/github/stars/injadlu/DAMO?style=social&label=Star) \n\n[arxiv 2025.02]  MM-RLHF: The Next Step Forward in Multimodal LLM Alignment [[PDF](https://arxiv.org/abs/2502.10391),[Page](https://mm-rlhf.github.io/)] ![Code](https://img.shields.io/github/stars/Kwai-YuanQi/MM-RLHF?style=social&label=Star) \n\n[arxiv 2025.02] RE-ALIGN: Aligning Vision Language Models via Retrieval-Augmented Direct Preference Optimization  [[PDF](https://arxiv.org/abs/2502.13146),[Page](https://github.com/taco-group/Re-Align)] ![Code](https://img.shields.io/github/stars/taco-group/Re-Align?style=social&label=Star) \n\n[arxiv 2025.02] OmniAlign-V: Towards Enhanced Alignment of MLLMs with Human Preference  [[PDF](https://arxiv.org/abs/2502.18411),[Page](https://github.com/PhoenixZ810/OmniAlign-V)] ![Code](https://img.shields.io/github/stars/PhoenixZ810/OmniAlign-V?style=social&label=Star) \n\n[arxiv 2025.03] Aligning Multimodal LLM with Human Preference: A Survey [[PDF](https://arxiv.org/abs/2503.14504),[Page](https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Alignment)] ![Code](https://img.shields.io/github/stars/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Alignment?style=social&label=Star) \n\n[arxiv 2025.05] Unified Multimodal Chain-of-Thought Reward Model through Reinforcement Fine-Tuning  [[PDF](https://arxiv.org/abs/2505.03318),[Page](https://codegoat24.github.io/UnifiedReward/think)] ![Code](https://img.shields.io/github/stars/CodeGoat24/UnifiedReward?style=social&label=Star) \n\n[arxiv 2025.05] Skywork-VL Reward: An Effective Reward Model for Multimodal Understanding and Reasoning  [[PDF](https://arxiv.org/abs/2505.07263),[Page](https://huggingface.co/Skywork/Skywork-VL-Reward-7B)] \n\n[arxiv 2025.05]  One RL to See Them All: Visual Triple Unified Reinforcement Learning [[PDF](https://arxiv.org/abs/2505.18129),[Page](https://github.com/MiniMax-AI/One-RL-to-See-Them-All)] ![Code](https://img.shields.io/github/stars/MiniMax-AI/One-RL-to-See-Them-All?style=social&label=Star) \n\n[arxiv 2025.06] LeanPO: Lean Preference Optimization for Likelihood Alignment in Video-LLMs  [[PDF](https://arxiv.org/abs/2506.05260),[Page](https://github.com/Wang-Xiaodong1899/LeanPO)] ![Code](https://img.shields.io/github/stars/Wang-Xiaodong1899/LeanPO?style=social&label=Star) \n\n[arxiv 2025.06]  Omni-DPO: A Dual-Perspective Paradigm for Dynamic Preference Learning of LLMs [[PDF](https://arxiv.org/abs/2506.10054)] ![Code](https://img.shields.io/github/stars/pspdada/Omni-DPO?style=social&label=Star) \n\n[arxiv 2025.10]  NoisyGRPO: Incentivizing Multimodal CoT Reasoning via Noise Injection and Bayesian Estimation 
\n\n[arxiv 2025.10]  NoisyGRPO: Incentivizing Multimodal CoT Reasoning via Noise Injection and Bayesian Estimation [[PDF](https://arxiv.org/abs/2510.21122),[Page](https://artanic30.github.io/project_pages/NoisyGRPO/)] ![Code](https://img.shields.io/github/stars/Artanic30/NoisyGRPO?style=social&label=Star) \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n\n## Agent\n\n[arxiv 2024.10] Agent S: An Open Agentic Framework that Uses Computers Like a Human[[PDF](https://arxiv.org/abs/2410.08164), [Page](https://github.com/simular-ai/Agent-S)]\n\n[arxiv 2024.12]  SPAgent: Adaptive Task Decomposition and Model Selection for General Video Generation and Editing [[PDF](https://arxiv.org/abs/2411.18983)]\n\n[arxiv 2024.12] TeamCraft: A Benchmark for Multi-Modal Multi-Agent Systems in Minecraft  [[PDF](https://arxiv.org/abs/2412.05255),[Page](https://teamcraft-bench.github.io/)] ![Code](https://img.shields.io/github/stars/teamcraft-bench/teamcraft?style=social&label=Star) \n\n[arxiv 2025.01] UnrealZoo: Enriching Photo-realistic Virtual Worlds for Embodied AI  [[PDF](https://arxiv.org/abs/2412.20977),[Page](http://unrealzoo.site/)] ![Code](https://img.shields.io/github/stars/UnrealZoo/unrealzoo-gym?style=social&label=Star) \n\n[arxiv 2025.02] Magma: A Foundation Model for Multimodal AI Agents  [[PDF](https://www.arxiv.org/pdf/2502.13130),[Page](https://microsoft.github.io/Magma/)] ![Code](https://img.shields.io/github/stars/microsoft/Magma?style=social&label=Star) \n\n[arxiv 2025.02] PC-Agent: A Hierarchical Multi-Agent Collaboration Framework for Complex Task Automation on PC  [[PDF](https://arxiv.org/pdf/2502.14282)] \n\n[arxiv 2025.03]  STEVE: A Step Verification Pipeline for Computer-use Agent Training [[PDF](https://arxiv.org/abs/2503.12532),[Page](https://github.com/FanbinLu/STEVE-R1)] ![Code](https://img.shields.io/github/stars/FanbinLu/STEVE-R1?style=social&label=Star) \n\n[arxiv 2025.08]  TurboTrain: Towards Efficient and Balanced Multi-Task Learning for Multi-Agent Perception and Prediction [[PDF](https://arxiv.org/abs/2508.04682),[Page](https://github.com/ucla-mobility/TurboTrain)] ![Code](https://img.shields.io/github/stars/ucla-mobility/TurboTrain?style=social&label=Star) \n\n[arxiv 2025.10]  MAT-Agent: Adaptive Multi-Agent Training Optimization [[PDF](https://arxiv.org/abs/2510.17845)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## Understanding \n\n[arxiv 2023.07]Generative Pretraining in Multimodality [[PDF](https://arxiv.org/abs/2307.05222),[Page](https://github.com/baaivision/Emu)]\n\n[arxiv 2023.07]Generating Images with Multimodal Language Models [[PDF](https://arxiv.org/abs/2305.17216),[Page](https://jykoh.com/gill)]\n\n[arxiv 2023.07]3D-LLM: Injecting the 3D World into Large Language Models [[PDF](https://arxiv.org/abs/2307.12981),[Page](https://vis-www.cs.umass.edu/3dllm/)]\n\n[arxiv 2023.10]Making LLaMA SEE and Draw with SEED Tokenizer [[PDF](https://arxiv.org/abs/2310.01218),[Page](https://github.com/AILab-CVC/SEED)]\n\n[arxiv 2023.10]Iterative Self-Refinement with GPT-4V(ision) for Automatic Image Design and Generation [[PDF](https://arxiv.org/abs/2310.08541),[Page](https://idea2img.github.io/)]\n\n[arxiv 2023.12]CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation[[PDF](https://arxiv.org/abs/2311.18775),[Page](https://codi-2.github.io/)]\n\n[arxiv 2023.12]Massively Multimodal Masked Modeling [[PDF](https://arxiv.org/abs/2312.06647),[Page](https://4m.epfl.ch/)]\n\n[arxiv 2023.12]Gemini: A Family of Highly Capable Multimodal 
Models [[PDF](https://arxiv.org/abs/2312.11805)]\n\n[arxiv 2023.12]Generative Multimodal Models are In-Context Learners [[PDF](https://arxiv.org/abs/2312.13286),[Page](https://baaivision.github.io/emu2)]\n\n[arxiv 2024.03]LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models [[PDF](https://arxiv.org/abs/2403.15388),[Page](https://llava-prumerge.github.io/)]\n\n[arxiv 2024.06]The Evolution of Multimodal Model Architectures [[PDF](https://arxiv.org/pdf/2405.17927)]\n\n[arxiv 2024.09]EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions[[PDF](https://arxiv.org/abs/2409.18042), [Page](https://emova-ollm.github.io/)]\n\n[arxiv 2024.09] Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models[[PDF](https://arxiv.org/abs/2409.17146)]\n\n[arxiv 2024.09] Visual Prompting in Multimodal Large Language Models: A Survey[[PDF](https://arxiv.org/abs/2409.15310)]\n\n[arxiv 2024.10]Baichuan-Omni Technical Report [[PDF](https://arxiv.org/abs/2410.08565), [Page](https://github.com/westlake-baichuan-mllm/bc-omni)]\n\n[arxiv 2024.10]TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models [[PDF](https://arxiv.org/abs/2410.10818), [Page](https://temporalbench.github.io/)]\n\n[arxiv 2024.10] γ−MoD: Exploring Mixture-of-Depth Adaptation for Multimodal Large Language Models[[PDF](https://arxiv.org/abs/2410.13859), [Page](https://github.com/Yaxin9Luo/Gamma-MOD)]\n\n[arxiv 2024.10]Remember, Retrieve and Generate: Understanding Infinite Visual Concepts as Your Personalized Assistant [[PDF](https://arxiv.org/abs/2410.13360), [Page](https://github.com/Hoar012/RAP-MLLM)]\n\n[arxiv 2024.11] Multimodal Alignment and Fusion: A Survey [[PDF](https://arxiv.org/abs/2411.17040)]\n\n[arxiv 2024.11] LLaVA-o1: Let Vision Language Models Reason Step-by-Step[[PDF](https://arxiv.org/abs/2411.10440)]\n\n[arxiv 2024.11] CATCH: Complementary Adaptive Token-level Contrastive Decoding to Mitigate Hallucinations in LVLMs[[PDF](https://arxiv.org/abs/2411.12713)]\n\n[arxiv 2024.12]  VisionZip: Longer is Better but Not Necessary in Vision Language Models [[PDF](https://arxiv.org/abs/2412.04467),[Page](https://github.com/dvlab-research/VisionZip)] ![Code](https://img.shields.io/github/stars/dvlab-research/VisionZip?style=social&label=Star) \n\n[arxiv 2024.12]  NVILA: Efficient Frontier Visual Language Models [[PDF](https://arxiv.org/abs/2412.04468)] \n\n[arxiv 2024.12]  Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling [[PDF](https://arxiv.org/abs/2412.05271),[Page](https://internvl.github.io/blog/)] ![Code](https://img.shields.io/github/stars/OpenGVLab/InternVL?style=social&label=Star) \n\n[arxiv 2024.12]  Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces [[PDF](https://arxiv.org/abs/2412.14171),[Page](https://vision-x-nyu.github.io/thinking-in-space.github.io/)] ![Code](https://img.shields.io/github/stars/vision-x-nyu/thinking-in-space?style=social&label=Star) \n\n[arxiv 2024.12] OpenEMMA: Open-Source Multimodal Model for End-to-End Autonomous Driving  [[PDF](https://arxiv.org/abs/2412.15208),[Page](https://github.com/taco-group/OpenEMMA)] ![Code](https://img.shields.io/github/stars/taco-group/OpenEMMA?style=social&label=Star) \n\n[arxiv 2024.12]  VLM-AD: End-to-End Autonomous Driving through Vision-Language Model Supervision [[PDF](https://arxiv.org/abs/2412.14446)]\n\n[arxiv 
2024.12]  HoVLE: Unleashing the Power of Monolithic Vision-Language Models with Holistic Vision-Language Embedding [[PDF](https://arxiv.org/abs/2412.16158),[Page](https://huggingface.co/OpenGVLab/HoVLE)] \n\n[arxiv 2024.12] Multimodal Latent Language Modeling with Next-Token Diffusion  [[PDF](https://arxiv.org/abs/2412.08635),[Page](https://aka.ms/GeneralAI)] \n\n[arxiv 2024.12] InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions  [[PDF](https://arxiv.org/abs/2412.09596),[Page](https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-OmniLive)] ![Code](https://img.shields.io/github/stars/InternLM/InternLM-XComposer?style=social&label=Star) \n\n\n[arxiv 2025.01]  VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM [[PDF](http://arxiv.org/abs/2501.00599),[Page](https://damo-nlp-sg.github.io/VideoRefer/)] ![Code](https://img.shields.io/github/stars/DAMO-NLP-SG/VideoRefer?style=social&label=Star) \n\n[arxiv 2025.01] VideoChat-Flash: Hierarchical Compression for Long-Context Video Modeling  [[PDF](https://arxiv.org/abs/2501.00574),[Page](https://github.com/OpenGVLab/VideoChat-Flash)] ![Code](https://img.shields.io/github/stars/OpenGVLab/VideoChat-Flash?style=social&label=Star) \n\n[arxiv 2025.01] VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction  [[PDF](https://arxiv.org/abs/2501.01957),[Page](https://github.com/VITA-MLLM/VITA)] ![Code](https://img.shields.io/github/stars/VITA-MLLM/VITA?style=social&label=Star) \n\n[arxiv 2025.01] Virgo: A Preliminary Exploration on Reproducing o1-like MLLM  [[PDF](https://arxiv.org/abs/2501.01904),[Page](https://github.com/RUCAIBox/Virgo)] ![Code](https://img.shields.io/github/stars/RUCAIBox/Virgo?style=social&label=Star) \n\n[arxiv 2025.01]  Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective [[PDF](),[Page]()] \n\n[arxiv 2025.01]  Dispider: Enabling Video LLMs with Active Real-Time Interaction via Disentangled Perception, Decision, and Reaction [[PDF](https://arxiv.org/abs/2501.03218),[Page](https://github.com/Mark12Ding/Dispider)] ![Code](https://img.shields.io/github/stars/Mark12Ding/Dispider?style=social&label=Star) \n\n[arxiv 2025.01] LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs  [[PDF](https://arxiv.org/abs/2501.06186),[Page](https://mbzuai-oryx.github.io/LlamaV-o1/)] ![Code](https://img.shields.io/github/stars/mbzuai-oryx/LlamaV-o1?style=social&label=Star) \n\n[arxiv 2025.01] VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding  [[PDF](https://arxiv.org/abs/2501.13106),[Page](https://github.com/DAMO-NLP-SG/VideoLLaMA3)] ![Code](https://img.shields.io/github/stars/DAMO-NLP-SG/VideoLLaMA3?style=social&label=Star) \n\n[arxiv 2025.01]  Next Token Prediction Towards Multimodal Intelligence[[PDF](https://arxiv.org/abs/2412.18619),[Page](https://github.com/LMM101/Awesome-Multimodal-Next-Token-Prediction)] ![Code](https://img.shields.io/github/stars/LMM101/Awesome-Multimodal-Next-Token-Prediction?style=social&label=Star) \n\n[arxiv 2025.01] MiniMax-01: Scaling Foundation Models with Lightning Attention  [[PDF](https://arxiv.org/abs/2501.08313),[Page](https://github.com/MiniMax-AI/MiniMax-01)] ![Code](https://img.shields.io/github/stars/MiniMax-AI/MiniMax-01?style=social&label=Star) \n\n[arxiv 2025.02] MINT: Mitigating 
Hallucinations in Large Vision-Language Models via Token Reduction  [[PDF](https://arxiv.org/abs/2502.00717)]\n\n[arxiv 2025.02] Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy  [[PDF](https://arxiv.org/abs/2502.05177),[Page](https://github.com/VITA-MLLM/Long-VITA)] ![Code](https://img.shields.io/github/stars/VITA-MLLM/Long-VITA?style=social&label=Star) \n\n[arxiv 2025.02]  PixelWorld: Towards Perceiving Everything as Pixels [[PDF](https://arxiv.org/abs/2501.19339),[Page](https://tiger-ai-lab.github.io/PixelWorld/)] ![Code](https://img.shields.io/github/stars/TIGER-AI-Lab/PixelWorld?style=social&label=Star) \n\n[arxiv 2025.02] Ola: Pushing the Frontiers of Omni-Modal Language Model with Progressive Modality Alignment  [[PDF](https://arxiv.org/abs/2502.04328),[Page](https://ola-omni.github.io/)] ![Code](https://img.shields.io/github/stars/Ola-Omni/Ola?style=social&label=Star) \n\n[arxiv 2025.02] video-SALMONN-o1: Reasoning-enhanced Audio-visual Large Language Model  [[PDF](https://arxiv.org/pdf/2502.11775),[Page](https://github.com/BriansIDP/video-SALMONN-o1)] ![Code](https://img.shields.io/github/stars/BriansIDP/video-SALMONN-o1?style=social&label=Star) \n\n[arxiv 2025.02] Qwen2.5-VL Technical Report  [[PDF](https://arxiv.org/abs/2502.13923),[Page](https://github.com/QwenLM/Qwen2.5-VL)] ![Code](https://img.shields.io/github/stars/QwenLM/Qwen2.5-VL?style=social&label=Star) \n\n[arxiv 2025.02]  Introducing Visual Perception Token into Multimodal Large Language Model [[PDF](https://arxiv.org/abs/2502.17425),[Page](https://github.com/yu-rp/VisualPerceptionToken)] ![Code](https://img.shields.io/github/stars/yu-rp/VisualPerceptionToken?style=social&label=Star) \n\n[arxiv 2025.02] MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs  [[PDF](https://arxiv.org/abs/2502.17422),[Page](https://github.com/saccharomycetes/mllms_know)] ![Code](https://img.shields.io/github/stars/saccharomycetes/mllms_know?style=social&label=Star) \n\n[arxiv 2025.03]  Visual-RFT: Visual Reinforcement Fine-Tuning [[PDF](https://arxiv.org/pdf/2503.01785),[Page](https://github.com/Liuziyu77/Visual-RFT)] ![Code](https://img.shields.io/github/stars/Liuziyu77/Visual-RFT?style=social&label=Star) \n\n[arxiv 2025.03]  Nexus-O: An Omni-Perceptive and -Interactive Model for Language, Audio, and Vision [[PDF](https://arxiv.org/pdf/2503.01879)]\n\n[arxiv 2025.03] Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models  [[PDF](https://arxiv.org/pdf/2503.06749),[Page](https://github.com/Osilly/Vision-R1)] ![Code](https://img.shields.io/github/stars/Osilly/Vision-R1?style=social&label=Star) \n\n[arxiv 2025.03] Qwen2.5-Omni Technical Report  [[PDF](https://arxiv.org/abs/2503.20215)]\n\n[arxiv 2025.04]  Shot-by-Shot: Film-Grammar-Aware Training-Free Audio Description Generation [[PDF](https://arxiv.org/abs/2504.01020),[Page](https://www.robots.ox.ac.uk/vgg/research/shot-by-shot/)] ![Code](https://img.shields.io/github/stars/Jyxarthur/shot-by-shot?style=social&label=Star) \n\n[arxiv 2025.04]  Aligned Better, Listen Better for Audio-Visual Large Language Models [[PDF](https://arxiv.org/abs/2504.02061)] \n\n[arxiv 2025.04] TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference  [[PDF](https://arxiv.org/abs/2504.03154)]\n\n[arxiv 2025.04]  Kimi-VL Technical Report 
[[PDF](https://arxiv.org/abs/2504.07491),[Page](https://github.com/MoonshotAI/Kimi-VL)] ![Code](https://img.shields.io/github/stars/MoonshotAI/Kimi-VL?style=social&label=Star) \n\n[arxiv 2025.04]  InternVL3: Exploring Advanced Training and Test-Time Recipes for Open-Source Multimodal Models [[PDF](https://arxiv.org/abs/2504.10479),[Page](https://github.com/OpenGVLab/InternVL)] ![Code](https://img.shields.io/github/stars/OpenGVLab/InternVL?style=social&label=Star) \n\n[arxiv 2025.04]  VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models [[PDF](https://arxiv.org/abs/2504.13122),[Page](https://github.com/HaroldChen19/VistaDPO)] ![Code](https://img.shields.io/github/stars/HaroldChen19/VistaDPO?style=social&label=Star) \n\n[arxiv 2025.04]  Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models [[PDF](https://arxiv.org/pdf/2504.15271),[Page](https://nvlabs.github.io/EAGLE/)] ![Code](https://img.shields.io/github/stars/NVlabs/EAGLE?style=social&label=Star) \n\n[arxiv 2025.05]  Seed1.5-VL Technical Report [[PDF](https://arxiv.org/abs/2505.07062)]\n\n[arxiv 2025.06] HaploVL - A Single-Transformer Baseline for Multi-Modal Understanding  [[PDF](http://arxiv.org/abs/2503.14694),[Page](https://github.com/Tencent/HaploVLM)] ![Code](https://img.shields.io/github/stars/Tencent/HaploVLM?style=social&label=Star) \n\n[arxiv 2025.07] Kwai Keye-VL Technical Report  [[PDF](https://arxiv.org/abs/2507.01949),[Page](https://github.com/Kwai-Keye/Keye)] ![Code](https://img.shields.io/github/stars/Kwai-Keye/Keye?style=social&label=Star) \n\n[arxiv 2025.07]  Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models [[PDF](https://arxiv.org/abs/2507.12566),[Page](https://github.com/OpenGVLab/Mono-InternVL)] ![Code](https://img.shields.io/github/stars/OpenGVLab/Mono-InternVL?style=social&label=Star) \n\n[arxiv 2025.07] Meta CLIP 2: A Worldwide Scaling Recipe  [[PDF](https://arxiv.org/abs/2507.22062),[Page](https://github.com/facebookresearch/MetaCLIP)] ![Code](https://img.shields.io/github/stars/facebookresearch/MetaCLIP?style=social&label=Star) \n\n[arxiv 2025.08]  Ovis2.5 Technical Report [[PDF](https://arxiv.org/pdf/2508.11737),[Page](https://github.com/AIDC-AI/Ovis)] ![Code](https://img.shields.io/github/stars/AIDC-AI/Ovis?style=social&label=Star) \n\n[arxiv 2025.08] InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency  [[PDF](https://arxiv.org/abs/2508.18265),[Page](https://github.com/OpenGVLab/InternVL)] ![Code](https://img.shields.io/github/stars/OpenGVLab/InternVL?style=social&label=Star) \n\n[arxiv 2025.09]  Kwai Keye-VL 1.5 Technical Report [[PDF](https://arxiv.org/abs/2509.01563),[Page](https://kwai-keye.github.io/)] \n\n[arxiv 2025.09] SAIL-VL2 Technical Report  [[PDF](https://arxiv.org/abs/2509.14033),[Page](https://github.com/BytedanceDouyinContent/SAIL-VL2)] ![Code](https://img.shields.io/github/stars/BytedanceDouyinContent/SAIL-VL2?style=social&label=Star) \n\n[arxiv 2025.09] MiniCPM-V 4.5: Cooking Efficient MLLMs via Architecture, Data, and Training Recipe  [[PDF](https://arxiv.org/abs/2509.18154),[Page](https://github.com/OpenBMB/MiniCPM-V)] ![Code](https://img.shields.io/github/stars/OpenBMB/MiniCPM-V?style=social&label=Star) \n\n[arxiv 2025.09] Qwen3-Omni Technical Report  [[PDF](https://arxiv.org/abs/2509.17765),[Page](https://github.com/QwenLM/Qwen3-Omni)] ![Code](https://img.shields.io/github/stars/QwenLM/Qwen3-Omni?style=social&label=Star) 
\n\n[arxiv 2025.10]  OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM [[PDF](https://arxiv.org/abs/2510.15870),[Page](https://nvlabs.github.io/OmniVinci/)] ![Code](https://img.shields.io/github/stars/NVlabs/OmniVinci?style=social&label=Star) \n\n[arxiv 2025.11] NVIDIA Nemotron Nano V2 VL  [[PDF](https://arxiv.org/pdf/2511.03929)]\n\n[arxiv 2025.11]  Qwen3-VL Technical Report [[PDF](https://arxiv.org/pdf/2511.21631),[Page](https://github.com/QwenLM/Qwen3-VL)] ![Code](https://img.shields.io/github/stars/QwenLM/Qwen3-VL?style=social&label=Star) \n\n[arxiv 2025.12]  Xiaomi MiMo-VL-Miloco Technical Report [[PDF](https://arxiv.org/abs/2512.17436),[Page](https://github.com/XiaoMi/xiaomi-mimo-vl-miloco)] ![Code](https://img.shields.io/github/stars/XiaoMi/xiaomi-mimo-vl-miloco?style=social&label=Star) \n\n[arxiv 2026.01]  VideoAuto-R1: Video Auto Reasoning via Thinking Once, Answering Twice [[PDF](https://arxiv.org/abs/2601.05175),[Page](https://ivul-kaust.github.io/projects/videoauto-r1/)] ![Code](https://img.shields.io/github/stars/IVUL-KAUST/VideoAuto-R1/?style=social&label=Star) \n\n[arxiv 2026.01] STEP3-VL-10B Technical Report  [[PDF](https://arxiv.org/abs/2601.09668),[Page](https://stepfun-ai.github.io/Step3-VL-10B/)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step3-VL-10B?style=social&label=Star) \n\n[arxiv 2026.01]  SkyReels-V3 Technique Report [[PDF](https://arxiv.org/abs/2601.17323),[Page](https://github.com/SkyworkAI/SkyReels-V3)] ![Code](https://img.shields.io/github/stars/SkyworkAI/SkyReels-V3?style=social&label=Star) \n\n[arxiv 2026.01] Youtu-VL: Unleashing Visual Potential via Unified Vision-Language Supervision  [[PDF](https://arxiv.org/abs/2601.19798),[Page](https://github.com/TencentCloudADP/youtu-vl)] ![Code](https://img.shields.io/github/stars/TencentCloudADP/youtu-vl?style=social&label=Star) \n\n[arxiv 2026.03] Phi-4-reasoning-vision-15B Technical Report  [[PDF](https://arxiv.org/pdf/2603.03975),[Page](https://huggingface.co/microsoft/Phi-4-reasoning-vision-15B)] ![Code](https://img.shields.io/github/stars/microsoft/phi-4-reasoning-vision-15B?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## sound-video understanding \n[arxiv 2025.12] JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation  [[PDF](https://arxiv.org/abs/2512.22905),[Page](https://github.com/JavisVerse/JavisGPT)] ![Code](https://img.shields.io/github/stars/JavisVerse/JavisGPT?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## downstream\n[arxiv 2025.05]  Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models [[PDF](https://arxiv.org/abs/2505.17015),[Page](https://runsenxu.com/projects/Multi-SpatialMLLM)] ![Code](https://img.shields.io/github/stars/facebookresearch/Multi-SpatialMLLM?style=social&label=Star) \n\n[arxiv 2025.09] AdsQA: Towards Advertisement Video Understanding  [[PDF](https://arxiv.org/pdf/2509.08621),[Page](https://github.com/TsinghuaC3I/AdsQA)] ![Code](https://img.shields.io/github/stars/TsinghuaC3I/AdsQA?style=social&label=Star) \n\n[arxiv 2025.11]  VidEmo: Affective-Tree Reasoning for Emotion-Centric Video Foundation Models [[PDF](https://arxiv.org/abs/2511.02712)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n\n## Long Video 
Understanding\n[arxiv 2025.02] CoS: Chain-of-Shot Prompting for Long Video Understanding  [[PDF](https://arxiv.org/abs/2502.06428),[Page](https://lwpyh.github.io/CoS/)] ![Code](https://img.shields.io/github/stars/lwpyh/CoS_codes?style=social&label=Star) \n\n[arxiv 2025.03]  VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning [[PDF](https://arxiv.org/abs/2503.13444),[Page](https://videomind.github.io/)] ![Code](https://img.shields.io/github/stars/yeliudev/VideoMind?style=social&label=Star) \n\n[arxiv 2025.04]  Chapter-Llama: Efficient Chaptering in Hour-Long Videos with LLMs [[PDF](https://arxiv.org/abs/2504.00072),[Page](https://imagine.enpc.fr/~lucas.ventura/chapter-llama/)] ![Code](https://img.shields.io/github/stars/lucas-ventura/chapter-llama/?style=social&label=Star) \n\n[arxiv 2025.04]  Slow-Fast Architecture for Video Multi-Modal Large Language Models [[PDF](https://arxiv.org/abs/2504.01328),[Page](https://github.com/SHI-Labs/Slow-Fast-Video-Multimodal-LLM)] ![Code](https://img.shields.io/github/stars/SHI-Labs/Slow-Fast-Video-Multimodal-LLM?style=social&label=Star) \n\n[arxiv 2025.04] TimeSearch: Hierarchical Video Search with Spotlight and Reflection for Human-like Long Video Understanding  [[PDF](https://arxiv.org/abs/2504.01407)]\n\n[arxiv 2025.04] Re-thinking Temporal Search for Long-Form Video Understanding  [[PDF](https://arxiv.org/abs/2504.02259)]\n\n[arxiv 2025.04] VideoAgent2: Enhancing the LLM-Based Agent System for Long-Form Video Understanding by Uncertainty-Aware CoT  [[PDF](https://arxiv.org/abs/2504.04471)]\n\n[arxiv 2025.04] Vidi: Large Multimodal Models for Video Understanding and Editing  [[PDF](https://arxiv.org/pdf/2504.15681),[Page](https://bytedance.github.io/vidi-website/)] ![Code](https://img.shields.io/github/stars/bytedance/vidi?style=social&label=Star) \n\n[arxiv 2025.05]  CrossLMM: Decoupling Long Video Sequences from LMMs via Dual Cross-Attention Mechanisms [[PDF](https://arxiv.org/abs/2505.17020),[Page](https://github.com/shilinyan99/CrossLMM)] ![Code](https://img.shields.io/github/stars/shilinyan99/CrossLMM?style=social&label=Star) \n\n[arxiv 2025.05]  QuickVideo: Real-Time Long Video Understanding with System Algorithm Co-Design[[PDF](https://arxiv.org/abs/2505.16175),[Page](https://github.com/TIGER-AI-Lab/QuickVideo)] ![Code](https://img.shields.io/github/stars/TIGER-AI-Lab/QuickVideo?style=social&label=Star) \n\n[arxiv 2025.05] VideoEval-Pro: Robust and Realistic Long Video Understanding Evaluation  [[PDF](https://arxiv.org/abs/2505.14640),[Page](https://tiger-ai-lab.github.io/VideoEval-Pro/home_page.html)] ![Code](https://img.shields.io/github/stars/TIGER-AI-Lab/VideoEval-Pro?style=social&label=Star) \n\n[arxiv 2025.05] Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding  [[PDF](https://arxiv.org/abs/2505.18079)]\n\n[arxiv 2025.06]  Movie Facts and Fibs (MF2): A Benchmark for Long Movie Understanding [[PDF](https://arxiv.org/abs/2506.06275)]\n\n[arxiv 2025.06] VRBench: A Benchmark for Multi-Step Reasoning in Long Narrative Videos  [[PDF](https://arxiv.org/abs/2506.10857),[Page](https://vrbench.github.io/)] ![Code](https://img.shields.io/github/stars/OpenGVLab/VRBench?style=social&label=Star) \n\n[arxiv 2025.06]  Ego-R1: Chain-of-Tool-Thought for Ultra-Long Egocentric Video Reasoning [[PDF](https://arxiv.org/abs/2506.13654),[Page](https://egolife-ai.github.io/Ego-R1/)] ![Code](https://img.shields.io/github/stars/egolife-ai/Ego-R1?style=social&label=Star) \n\n[arxiv 2025.06]  Task-Aware KV Compression For 
Cost-Effective Long Video Understanding [[PDF](https://arxiv.org/abs/2506.21184),[Page](https://github.com/UnableToUseGit/VideoX22L)] ![Code](https://img.shields.io/github/stars/UnableToUseGit/VideoX22L?style=social&label=Star) \n\n[arxiv 2025.06]  Video-XL-2: Towards Very Long-Video Understanding Through Task-Aware KV Sparsification [[PDF](https://arxiv.org/abs/2506.19225),[Page](https://unabletousegit.github.io/video-xl2.github.io/)] ![Code](https://img.shields.io/github/stars/VectorSpaceLab/Video-XL?style=social&label=Star) \n\n[arxiv 2025.07] Flash-VStream: Efficient Real-Time Understanding for Long Video Streams  [[PDF](https://arxiv.org/abs/2506.23825),[Page](https://github.com/IVGSZ/Flash-VStream)] ![Code](https://img.shields.io/github/stars/IVGSZ/Flash-VStream?style=social&label=Star) \n\n[arxiv 2025.07] ARC-Hunyuan-Video-7B: Structured Video Comprehension of Real-World Shorts  [[PDF](https://arxiv.org/abs/2507.20939),[Page](https://github.com/TencentARC/ARC-Hunyuan-Video-7B)] ![Code](https://img.shields.io/github/stars/TencentARC/ARC-Hunyuan-Video-7B?style=social&label=Star) \n\n[arxiv 2025.08] AVATAR: Reinforcement Learning to See, Hear, and Reason Over Video  [[PDF](https://arxiv.org/abs/2508.03100),[Page](https://github.com/yogkul2000/AVATAR)] ![Code](https://img.shields.io/github/stars/yogkul2000/AVATAR?style=social&label=Star) \n\n[arxiv 2025.08] Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning  [[PDF](https://arxiv.org/abs/2508.04416)]\n\n[arxiv 2025.08]  StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding [[PDF](https://arxiv.org/pdf/2508.15717),[Page](https://yangyanl.ai/streammem/)] \n\n[arxiv 2025.09] DATE: Dynamic Absolute Time Enhancement for Long Video Understanding  [[PDF](http://arxiv.org/abs/2509.09263),[Page](https://github.com/yuanc3/DATE)] ![Code](https://img.shields.io/github/stars/yuanc3/DATE?style=social&label=Star) \n\n[arxiv 2025.10]  Video-in-the-Loop: Span-Grounded Long Video QA with Interleaved Reasoning [[PDF](https://arxiv.org/abs/2510.04022)]\n\n[arxiv 2025.10] video-SALMONN S: Streaming Audio-Visual LLMs Beyond Length Limits via Memory  [[PDF](https://arxiv.org/pdf/2510.11129)]\n\n[arxiv 2025.10] Vgent: Graph-based Retrieval-Reasoning-Augmented Generation For Long Video Understanding  [[PDF](https://arxiv.org/abs/2510.14032),[Page](https://xiaoqian-shen.github.io/Vgent/)] ![Code](https://img.shields.io/github/stars/xiaoqian-shen/Vgent?style=social&label=Star) \n\n[arxiv 2025.11]  TimeSearch-R: Adaptive Temporal Search for Long-Form Video Understanding via Self-Verification Reinforcement Learning [[PDF](https://arxiv.org/abs/2511.05489),[Page](https://github.com/Time-Search/TimeSearch-R)] ![Code](https://img.shields.io/github/stars/Time-Search/TimeSearch-R?style=social&label=Star) \n\n[arxiv 2025.11]  Seeing the Forest and the Trees: Query-Aware Tokenizer for Long-Video Multimodal Language Models [[PDF](https://arxiv.org/abs/2511.11910),[Page](https://github.com/Siyou-Li/QTSplus)] ![Code](https://img.shields.io/github/stars/Siyou-Li/QTSplus?style=social&label=Star) \n\n[arxiv 2025.12] VideoZoomer: Reinforcement-Learned Temporal Focusing for Long Video Reasoning  [[PDF](https://arxiv.org/abs/2512.22315)]\n\n[arxiv 2026.03] Question-guided Visual Compression with Memory Feedback for Long-Term Video Understanding  [[PDF](https://arxiv.org/abs/2603.15167)]\n\n[arxiv 2026.03] A Multi-Agent Perception-Action Alliance for Efficient Long Video Reasoning  
[[PDF](https://arxiv.org/abs/2603.14052),[Page](https://github.com/git-disl/A4VL)] ![Code](https://img.shields.io/github/stars/git-disl/A4VL?style=social&label=Star)\n\n[arxiv 2026.03] Symphony: A Cognitively-Inspired Multi-Agent System for Long-Video Understanding  [[PDF](https://arxiv.org/abs/2603.17304)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## Generation \n[arxiv 2023.12]SmartEdit: Exploring Complex Instruction-based Image Editing with Multimodal Large Language Models [[PDF](https://arxiv.org/abs/2312.06739),[Page](https://yuzhou914.github.io/SmartEdit/)]\n\n[arxiv 2023.12]InstructAny2Pix: Flexible Visual Editing via Multimodal Instruction Following [[PDF](https://arxiv.org/abs/2312.06738), [Page](https://github.com/jacklishufan/InstructAny2Pix.git)]\n\n[arxiv 2024.01]DiffusionGPT: LLM-Driven Text-to-Image Generation System [[PDF](https://arxiv.org/abs/2401.10061)]\n\n[arxiv 2024.01]Image Anything: Towards Reasoning-coherent and Training-free Multi-modal Image Generation [[PDF](https://arxiv.org/abs/2401.17664),[Page](https://vlislab22.github.io/ImageAnything/)]\n\n[arxiv 2024.03]3D-VLA: 3D Vision-Language-Action Generative World Model [[PDF](https://arxiv.org/abs/2403.09631),[Page](https://vis-www.cs.umass.edu/3dvla/)]\n\n[arxiv 2024.04]SEED-X: Multimodal Models with Unified Multi-granularity Comprehension and Generation [[PDF](https://arxiv.org/abs/2404.14396)]\n\n[arxiv 2024.08] Show-o: One Single Transformer to Unify Multimodal Understanding and Generation[[PDF](https://arxiv.org/abs/2408.12528), [Page](https://github.com/showlab/Show-o)]\n\n[arxiv 2024.09] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation[[PDF](https://arxiv.org/abs/2409.04429)]\n\n\n[arxiv 2024.09] Emu3: Next-Token Prediction is All You Need[[PDF](https://arxiv.org/abs/2409.18869), [Page](https://emu.baai.ac.cn/)]\n\n[arxiv 2024.09]MonoFormer: One Transformer for Both Diffusion and Autoregression [[PDF](https://arxiv.org/abs/2409.16280), [Page](https://monoformer.github.io/)]\n\n[arxiv 2024.10] PUMA: Empowering Unified MLLM with Multi-granular Visual Generation[[PDF](https://arxiv.org/abs/2410.13861), [Page](https://rongyaofang.github.io/puma/)]\n\n[arxiv 2024.10]ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer [[PDF](https://arxiv.org/abs/2410.00086), [Page](https://ali-vilab.github.io/ace-page/)]\n\n[arxiv 2024.10]Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2410.13848), [Page](https://github.com/deepseek-ai/Janus)]\n\n[arxiv 2024.11]Spider: Any-to-Many Multimodal LLM [[PDF](https://arxiv.org/abs/2411.09439)]\n\n[arxiv 2024.12] TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2412.03069),[Page](https://byteflow-ai.github.io/TokenFlow/)] ![Code](https://img.shields.io/github/stars/ByteFlow-AI/TokenFlow?style=social&label=Star) \n\n[arxiv 2025.02]  HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation [[PDF](https://arxiv.org/pdf/2502.12148),[Page](https://github.com/Gen-Verse/HermesFlow)] ![Code](https://img.shields.io/github/stars/Gen-Verse/HermesFlow?style=social&label=Star) \n\n[arxiv 2025.02]  UniTok: A Unified Tokenizer for Visual Generation and Understanding [[PDF](https://arxiv.org/abs/2502.20321),[Page](https://github.com/FoundationVision/UniTok)] 
![Code](https://img.shields.io/github/stars/FoundationVision/UniTok?style=social&label=Star) \n\n[arxiv 2025.03]  WeGen: A Unified Model for Interactive Multimodal Generation as We Chat [[PDF](https://arxiv.org/pdf/2503.01115),[Page](https://github.com/hzphzp/WeGen)] ![Code](https://img.shields.io/github/stars/hzphzp/WeGen?style=social&label=Star) \n\n[arxiv 2025.03]  Unified Autoregressive Visual Generation and Understanding with Continuous Tokens [[PDF](https://arxiv.org/pdf/2503.13436)]\n\n[arxiv 2025.03]  GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing [[PDF](https://arxiv.org/abs/2503.10639),[Page](https://github.com/rongyaofang/GoT)] ![Code](https://img.shields.io/github/stars/rongyaofang/GoT?style=social&label=Star) \n\n[arxiv 2025.03]  Harmonizing Visual Representations for Unified Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2503.21979),[Page](https://github.com/wusize/Harmon)] ![Code](https://img.shields.io/github/stars/wusize/Harmon?style=social&label=Star) \n\n[arxiv 2025.04]  Transfer between Modalities with MetaQueries [[PDF](https://arxiv.org/abs/2504.06256),[Page](https://xichenpan.com/metaquery/)] \n\n[arxiv 2025.05] Nexus-Gen: A Unified Model for Image Understanding, Generation, and Editing  [[PDF](https://arxiv.org/abs/2504.21356)]\n\n[arxiv 2025.05]  YoChameleon: Personalized Vision and Language Generation [[PDF](https://arxiv.org/abs/2504.20998),[Page](https://thaoshibe.github.io/YoChameleon)] ![Code](https://img.shields.io/github/stars/WisconsinAIVision/YoChameleon?style=social&label=Star) \n\n[arxiv 2025.05] X-Fusion: Introducing New Modality to Frozen Large Language Models  [[PDF](https://arxiv.org/abs/2504.20996),[Page](https://sichengmo.github.io/XFusion/)] \n\n[arxiv 2025.05] Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities  [[PDF](https://arxiv.org/pdf/2505.02567)]\n\n[arxiv 2025.05] Mogao: An Omni Foundation Model for Interleaved Multi-Modal Generation  [[PDF](https://arxiv.org/pdf/2505.05472)]\n\n[arxiv 2025.05]  TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation [[PDF](https://arxiv.org/abs/2505.05422),[Page](https://github.com/TencentARC/TokLIP)] ![Code](https://img.shields.io/github/stars/TencentARC/TokLIP?style=social&label=Star) \n\n[arxiv 2025.05] Selftok: Discrete Visual Tokens of Autoregression, by Diffusion, and for Reasoning  [[PDF](https://arxiv.org/pdf/2505.07538),[Page](https://selftok-team.github.io/report/)] \n\n[arxiv 2025.05]  GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning[[PDF](https://arxiv.org/abs/2505.17022),[Page](https://github.com/gogoduan/GoT-R1)] ![Code](https://img.shields.io/github/stars/gogoduan/GoT-R1?style=social&label=Star) \n\n[arxiv 2025.05] Delving into RL for Image Generation with CoT: A Study on DPO vs. 
GRPO  [[PDF](https://arxiv.org/abs/2505.17017),[Page](https://github.com/ZiyuGuo99/Image-Generation-CoT)] ![Code](https://img.shields.io/github/stars/ZiyuGuo99/Image-Generation-CoT?style=social&label=Star) \n\n[arxiv 2025.05]  UniGen: Enhanced Training & Test-Time Strategies for Unified Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2505.14682)]\n\n[arxiv 2025.05] Co-Reinforcement Learning for Unified Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2505.17534)]\n\n[arxiv 2025.06]  Ming-Omni: A Unified Multimodal Model for Perception and Generation [[PDF](https://arxiv.org/abs/2506.09344),[Page](https://lucaria-academy.github.io/Ming-Omni/)] ![Code](https://img.shields.io/github/stars/inclusionAI/Ming?style=social&label=Star) \n\n[arxiv 2025.06] Pisces: An Auto-regressive Foundation Model for Image Understanding and Generation  [[PDF](https://arxiv.org/abs/2506.10395)]\n\n[arxiv 2025.06] Show-o2: Improved Native Unified Multimodal Models  [[PDF](https://arxiv.org/abs/2506.15564),[Page](https://github.com/showlab/Show-o)] ![Code](https://img.shields.io/github/stars/showlab/Show-o?style=social&label=Star) \n\n[arxiv 2025.06]  UniFork: Exploring Modality Alignment for Unified Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2506.17202),[Page](https://github.com/tliby/UniFork)] ![Code](https://img.shields.io/github/stars/tliby/UniFork?style=social&label=Star) \n\n[arxiv 2025.06] OmniGen2: Exploration to Advanced Multimodal Generation  [[PDF](https://arxiv.org/abs/2506.18871),[Page](https://github.com/VectorSpaceLab/OmniGen2)] ![Code](https://img.shields.io/github/stars/VectorSpaceLab/OmniGen2?style=social&label=Star) \n\n[arxiv 2025.07]  MENTOR: Efficient Multimodal-Conditioned Tuning for Autoregressive Vision Generation [[PDF](https://arxiv.org/abs/2507.09574),[Page](https://haozhezhao.github.io/MENTOR.page)] ![Code](https://img.shields.io/github/stars/HaozheZhao/MENTOR?style=social&label=Star) \n\n[arxiv 2025.07]  X-Omni: Reinforcement Learning Makes Discrete Autoregressive Image Generative Models Great Again [[PDF](https://arxiv.org/abs/2507.22058)] \n\n[arxiv 2025.07]  UniLiP: Adapting CLIP for Unified Multimodal Understanding, Generation and Editing [[PDF](https://arxiv.org/abs/2507.23278)]\n\n[arxiv 2025.08] Skywork UniPic: Unified Autoregressive Modeling for Visual Understanding and Generation  [[PDF](https://arxiv.org/abs/2508.03320),[Page](https://github.com/SkyworkAI/UniPic)] ![Code](https://img.shields.io/github/stars/SkyworkAI/UniPic?style=social&label=Star) \n\n[arxiv 2025.09] OneCAT: Decoder-Only Auto-Regressive Model for Unified Understanding and Generation  [[PDF](https://arxiv.org/pdf/2509.03498),[Page](https://onecat-ai.github.io/)] ![Code](https://img.shields.io/github/stars/onecat-ai/onecat?style=social&label=Star) \n\n[arxiv 2025.09] Skywork UniPic 2.0: Building Kontext Model with Online RL for Unified Multimodal Model  [[PDF](https://arxiv.org/abs/2509.04548),[Page](https://unipic-v2.github.io/)] ![Code](https://img.shields.io/github/stars/SkyworkAI/UniPic/?style=social&label=Star) \n\n[arxiv 2025.09]  Can Understanding and Generation Truly Benefit Together — or Just Coexist? 
[[PDF](https://arxiv.org/pdf/2509.09666),[Page](https://github.com/PKU-YuanGroup/UAE)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/UAE?style=social&label=Star) \n\n[arxiv 2025.09] Lavida-O: Elastic Large Masked Diffusion Models for Unified Multimodal Understanding and Generation  [[PDF](https://arxiv.org/pdf/2509.19244)]\n\n[arxiv 2025.09] Hyper-Bagel: A Unified Acceleration Framework for Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2509.18824),[Page](https://hyper-bagel.github.io/)] \n\n[arxiv 2025.09] Understanding-in-Generation: Reinforcing Generative Capability of Unified Model via Infusing Understanding into Generation  [[PDF](https://arxiv.org/abs/2509.18639),[Page](https://github.com/QC-LY/UiG)] ![Code](https://img.shields.io/github/stars/QC-LY/UiG?style=social&label=Star) \n\n[arxiv 2025.09] MANZANO: A Simple and Scalable Unified Multimodal Model with a Hybrid Vision Tokenizer  [[PDF](https://arxiv.org/abs/2509.16197)]\n\n[arxiv 2025.10]  Lumina-DiMOO: An Omni Diffusion Large Language Model for Multi-Modal Generation and Understanding [[PDF](https://arxiv.org/abs/2510.06308),[Page](https://github.com/Alpha-VLLM/Lumina-DiMOO)] ![Code](https://img.shields.io/github/stars/Alpha-VLLM/Lumina-DiMOO?style=social&label=Star) \n\n[arxiv 2025.10] LightBagel: A Light-weighted, Double Fusion Framework for Unified Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2510.22946),[Page](https://ucsc-vlaa.github.io/LightBagel/)] \n\n[arxiv 2025.10]  Ming-Flash-Omni: A Sparse, Unified Architecture for Multimodal Perception and Generation [[PDF](https://arxiv.org/abs/2510.24821),[Page](https://github.com/inclusionAI/Ming)] ![Code](https://img.shields.io/github/stars/inclusionAI/Ming?style=social&label=Star) \n\n[arxiv 2025.10] Emu3.5: Native Multimodal Models are World Learners  [[PDF](https://arxiv.org/abs/2510.26583),[Page](https://emu.world/)] \n\n[arxiv 2025.11]  MammothModa2: Jointly Optimized Autoregressive-Diffusion Models for Unified Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2511.18262),[Page](https://ali-vilab.github.io/MammothModa-Page/)] ![Code](https://img.shields.io/github/stars/bytedance/mammothmoda?style=social&label=Star) \n\n[arxiv 2025.12]  Tuna: Taming Unified Visual Representations for Native Unified Multimodal Models [[PDF](https://arxiv.org/pdf/2512.02014),[Page](https://tuna-ai.org/)] ![Code](https://img.shields.io/github/stars/wren93/tuna?style=social&label=Star) \n\n[arxiv 2025.12]  Visual-Aware CoT: Achieving High-Fidelity Visual Consistency in Unified Models [[PDF](https://arxiv.org/pdf/2512.19686),[Page](https://zixuan-ye.github.io/VACoT/)] \n\n[arxiv 2026.01]  NextFlow: Unified Sequential Modeling Activates Multimodal Understanding and Generation [[PDF](https://arxiv.org/pdf/2601.02204),[Page](https://github.com/ByteVisionLab/NextFlow)] ![Code](https://img.shields.io/github/stars/ByteVisionLab/NextFlow?style=social&label=Star) \n\n[arxiv 2026.02] Kelix Technique Report: Closing the Understanding Gap of Discrete Tokens in Unified Multimodal Models  [[PDF](https://arxiv.org/pdf/2602.09843)]\n\n[arxiv 2026.03] InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing  [[PDF](https://arxiv.org/abs/2603.09877),[Page](https://github.com/OpenGVLab/InternVL-U)] ![Code](https://img.shields.io/github/stars/OpenGVLab/InternVL-U?style=social&label=Star) \n\n[arxiv 2026.03] HiMu: Hierarchical Multimodal Frame Selection for Long Video Question Answering  
[[PDF](https://arxiv.org/abs/2603.18558)]\n\n[arxiv 2026.03] LVOmniBench: Pioneering Long Audio-Video Understanding Evaluation for Omnimodal LLMs  [[PDF](https://arxiv.org/abs/2603.19217),[Page](https://kd-tao.github.io/LVOmniBench/)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## Omni\n[arxiv 2025.09] Qwen3-Omni Technical Report  [[PDF](https://arxiv.org/abs/2509.17765),[Page](https://github.com/QwenLM/Qwen3-Omni)] ![Code](https://img.shields.io/github/stars/QwenLM/Qwen3-Omni?style=social&label=Star) \n\n[arxiv 2025.10] InteractiveOmni: A Unified Omni-modal Model for Audio-Visual Multi-turn Dialogue  [[PDF](https://arxiv.org/abs/2510.13747),[Page](https://github.com/SenseTime-FVG/InteractiveOmni)] ![Code](https://img.shields.io/github/stars/SenseTime-FVG/InteractiveOmni?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n\n## diffusion LLM\n[arxiv 2025.05]  MMaDA: Multimodal Large Diffusion Language Models [[PDF](https://arxiv.org/abs/2505.15809),[Page](https://github.com/Gen-Verse/MMaDA)] ![Code](https://img.shields.io/github/stars/Gen-Verse/MMaDA?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## evaluation \n[arxiv 2024.10] The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio [[PDF](https://arxiv.org/abs/2410.12787)]\n\n\n## VLA \n[arxiv 2025.07] EmbRACE-3K: Embodied Reasoning and Action in Complex Environments  [[PDF](https://arxiv.org/pdf/2507.10548),[Page](https://mxllc.github.io/EmbRACE-3K/)] ![Code](https://img.shields.io/github/stars/mxllc/EmbRACE-3K?style=social&label=Star) \n\n[arxiv 2025.07] EgoVLA: Learning Vision-Language-Action Models from Egocentric Human Videos  [[PDF](https://arxiv.org/pdf/2507.12440),[Page](https://rchalyang.github.io/EgoVLA/)] \n\n[arxiv 2025.07] ThinkAct: Vision-Language-Action Reasoning via Reinforced Visual Latent Planning  [[PDF](https://arxiv.org/abs/2507.16815),[Page](https://jasper0314-huang.github.io/thinkact-vla/)] \n\n[arxiv 2025.08]  Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies [[PDF](https://arxiv.org/abs/2508.20072)]\n\n[arxiv 2025.09]  WoW: Towards a World omniscient World model Through Embodied Interaction [[PDF](https://arxiv.org/abs/2509.22642),[Page](https://wow-world-model.github.io/)] ![Code](https://img.shields.io/github/stars/wow-world-model/wow-world-model?style=social&label=Star) \n\n[arxiv 2025.10] PixelVLA: Advancing Pixel-level Understanding in Vision-Language-Action Model  [[PDF](https://arxiv.org/abs/2511.01571)]\n\n[arxiv 2025.11]  Unified Diffusion VLA: Vision-Language-Action Model via Joint Discrete Denoising Diffusion Process [[PDF](https://arxiv.org/abs/2511.01718),[Page](https://irpn-eai.github.io/UD-VLA.github.io/)] ![Code](https://img.shields.io/github/stars/OpenHelix-Team/UD-VLA?style=social&label=Star) \n\n[arxiv 2025.12] ENACT: Evaluating Embodied Cognition with World Modeling of Egocentric Interaction  [[PDF](https://arxiv.org/abs/2511.20937),[Page](https://enact-embodied-cognition.github.io/)] ![Code](https://img.shields.io/github/stars/mll-lab-nu/ENACT?style=social&label=Star) \n\n[arxiv 2025.12]  Motus: A Unified Latent Action World Model 
[[PDF](https://arxiv.org/abs/2512.13030),[Page](https://motus-robotics.github.io/motus)] ![Code](https://img.shields.io/github/stars/thu-ml/Motus?style=social&label=Star) \n\n[arxiv 2026.01]  Fast-ThinkAct: Efficient Vision-Language-Action Reasoning via Verbalizable Latent Planning [[PDF](https://arxiv.org/abs/2601.09708),[Page](https://jasper0314-huang.github.io/fast-thinkact/)]\n\n[arxiv 2026.01]  VisGym: Diverse, Customizable, Scalable Environments for Multimodal Agents [[PDF](https://arxiv.org/abs/2601.16973),[Page](https://visgym.github.io/)] ![Code](https://img.shields.io/github/stars/visgym/VIsGym?style=social&label=Star) \n\n[arxiv 2026.01]  A Pragmatic VLA Foundation Model [[PDF](https://arxiv.org/abs/2601.18692),[Page](https://technology.robbyant.com/lingbot-vla)] ![Code](https://img.shields.io/github/stars/Robbyant/lingbot-vla?style=social&label=Star) \n\n[arxiv 2026.01]  MAIN-VLA: Modeling Abstraction of Intention and eNvironment for Vision-Language-Action Models [[PDF](https://arxiv.org/abs/2602.02212),[Page](https://main-vla.github.io/)] \n\n[arxiv 2026.02] DreamZero: World Action Models are Zero-shot Policies  [[PDF](https://dreamzero0.github.io/DreamZero.pdf),[Page](https://dreamzero0.github.io/)] ![Code](https://img.shields.io/github/stars/dreamzero0/dreamzero?style=social&label=Star) \n\n[arxiv 2026.03] Look Before Acting: Enhancing Vision Foundation Representations for Vision-Language-Action Models  [[PDF](https://arxiv.org/abs/2603.15618),[Page](https://deepvision-vla.github.io/)]\n\n[arxiv 2026.03] RealVLG-R1: A Large-Scale Real-World Visual-Language Grounding Benchmark for Robotic Perception and Manipulation  [[PDF](https://arxiv.org/abs/2603.14880),[Page](https://github.com/lif314/RealVLG-R1)] ![Code](https://img.shields.io/github/stars/lif314/RealVLG-R1?style=social&label=Star)\n\n[arxiv 2026.03] VLA-Thinker: Boosting Vision-Language-Action Models through Thinking-with-Image Reasoning  [[PDF](https://arxiv.org/abs/2603.14523),[Page](https://cywang735.github.io/VLA-Thinker/)] ![Code](https://img.shields.io/github/stars/CYWang735/VLAThinker?style=social&label=Star)\n\n[arxiv 2026.03] Beyond Dense Futures: World Models as Structured Planners for Robotic Manipulation  [[PDF](https://arxiv.org/abs/2603.12553),[Page](https://wm-planner.github.io/structvla/)] ![Code](https://img.shields.io/github/stars/wm-planner/structvla?style=social&label=Star) \n\n[arxiv 2026.03] GigaWorld-Policy: An Efficient Action-Centered World-Action Model  [[PDF](https://arxiv.org/abs/2603.17240)]\n\n[arxiv 2026.03] Fast-dVLA: Accelerating Discrete Diffusion VLA to Real-Time Performance  [[PDF](https://arxiv.org/abs/2603.25661),[Page](https://chris1220313648.github.io/Fast-dVLA/)]\n\n[arxiv 2026.03] LaMP: Learning Vision-Language-Action Policies with 3D Scene Flow as Latent Motion Prior  [[PDF](https://arxiv.org/abs/2603.25399),[Page](https://summerwxk.github.io/lamp-project-page/)]\n\n[arxiv 2026.03] Beyond Attention Magnitude: Leveraging Inter-layer Rank Consistency for Efficient Vision-Language-Action Models  [[PDF](https://arxiv.org/abs/2603.24941)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## reasoning\n\n[arxiv 2025.03] Video-R1: Reinforcing Video Reasoning in MLLMs  [[PDF](https://arxiv.org/abs/2503.21776),[Page](https://github.com/tulerfeng/Video-R1)] ![Code](https://img.shields.io/github/stars/tulerfeng/Video-R1?style=social&label=Star) \n\n[arxiv 2025.04] Skywork R1V: Pioneering Multimodal Reasoning with 
Chain-of-Thought  [[PDF](https://arxiv.org/abs/2504.05599)]\n\n[arxiv 2025.04]  VLM-R1: A Stable and Generalizable R1-style Large Vision-Language Model [[PDF](https://arxiv.org/abs/2504.07615),[Page](https://github.com/om-ai-lab/VLM-R1)] ![Code](https://img.shields.io/github/stars/om-ai-lab/VLM-R1?style=social&label=Star) \n\n[arxiv 2025.04] VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning  [[PDF](https://arxiv.org/abs/2504.06958),[Page](https://github.com/OpenGVLab/VideoChat-R1)] ![Code](https://img.shields.io/github/stars/OpenGVLab/VideoChat-R1?style=social&label=Star) \n\n[arxiv 2025.04] Relation-R1: Cognitive Chain-of-Thought Guided Reinforcement Learning for Unified Relational Comprehension  [[PDF](https://arxiv.org/pdf/2504.14642),[Page](https://github.com/HKUST-LongGroup/Relation-R1)] ![Code](https://img.shields.io/github/stars/HKUST-LongGroup/Relation-R1?style=social&label=Star) \n\n[arxiv 2025.04]  Unsupervised Visual Chain-of-Thought Reasoning via Preference Optimization [[PDF](https://arxiv.org/pdf/2504.18397),[Page](https://kesenzhao.github.io/my_project/projects/UV-CoT.html)] ![Code](https://img.shields.io/github/stars/kesenzhao/UV-CoT?style=social&label=Star) \n\n[arxiv 2025.05]  R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning [[PDF](https://arxiv.org/abs/2505.02835),[Page](https://github.com/yfzhang114/r1_reward)] ![Code](https://img.shields.io/github/stars/yfzhang114/r1_reward?style=social&label=Star) \n\n[arxiv 2025.05]  EchoInk-R1: Exploring Audio-Visual Reasoning in Multimodal LLMs via Reinforcement Learning [[PDF](https://arxiv.org/abs/2505.04623),[Page](https://github.com/HarryHsing/EchoInk)] ![Code](https://img.shields.io/github/stars/HarryHsing/EchoInk?style=social&label=Star) \n\n[arxiv 2025.05] SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward  [[PDF](https://arxiv.org/abs/2505.17018),[Page](https://github.com/kxfan2002/SophiaVL-R1)] ![Code](https://img.shields.io/github/stars/kxfan2002/SophiaVL-R1?style=social&label=Star) \n\n[arxiv 2025.05] R1-ShareVL: Incentivizing Reasoning Capability of Multimodal Large Language Models via Share-GRPO  [[PDF](https://arxiv.org/abs/2505.16673),[Page](https://github.com/HJYao00/R1-ShareVL)] ![Code](https://img.shields.io/github/stars/HJYao00/R1-ShareVL?style=social&label=Star) \n\n[arxiv 2025.05]  STAR-R1: Spatial TrAnsformation Reasoning by Reinforcing Multimodal LLMs [[PDF](https://arxiv.org/abs/2505.15804),[Page](https://github.com/zongzhao23/STAR-R1)] ![Code](https://img.shields.io/github/stars/zongzhao23/STAR-R1?style=social&label=Star) \n\n[arxiv 2025.05]  Visual Thoughts: A Unified Perspective of Understanding Multimodal Chain-of-Thought [[PDF](https://arxiv.org/abs/2505.15510)]\n\n[arxiv 2025.05] Chain-of-Focus: Adaptive Visual Search and Zooming for Multimodal Reasoning via RL  [[PDF](https://arxiv.org/abs/2505.15436)]\n\n[arxiv 2025.05]  Visionary-R1: Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning [[PDF](https://arxiv.org/abs/2505.14677),[Page](https://github.com/maifoundations/Visionary-R1)] ![Code](https://img.shields.io/github/stars/maifoundations/Visionary-R1?style=social&label=Star) \n\n[arxiv 2025.05]  VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning [[PDF](https://arxiv.org/abs/2505.12081),[Page](https://github.com/dvlab-research/VisionReasoner)] ![Code](https://img.shields.io/github/stars/dvlab-research/VisionReasoner?style=social&label=Star) \n\n[arxiv 2025.05]  Patho-R1: A Multimodal 
Reinforcement Learning-Based Pathology Expert Reasoner [[PDF](https://arxiv.org/abs/2505.11404),[Page](https://github.com/Wenchuan-Zhang/Patho-R1)] ![Code](https://img.shields.io/github/stars/Wenchuan-Zhang/Patho-R1?style=social&label=Star) \n\n[arxiv 2025.06]  Agent-X: Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks [[PDF](https://arxiv.org/abs/2505.24876),[Page](https://github.com/mbzuai-oryx/Agent-X)] ![Code](https://img.shields.io/github/stars/mbzuai-oryx/Agent-X?style=social&label=Star) \n\n[arxiv 2025.06]  SiLVR: A Simple Language-based Video Reasoning Framework [[PDF](https://arxiv.org/abs/2505.24869),[Page](https://sites.google.com/cs.unc.edu/silvr)] ![Code](https://img.shields.io/github/stars/CeeZh/SILVR?style=social&label=Star) \n\n[arxiv 2025.06] Video-Skill-CoT: Skill-based Chain-of-Thoughts for Domain-Adaptive Video Reasoning  [[PDF](https://arxiv.org/abs/2506.03525),[Page](https://video-skill-cot.github.io/)] ![Code](https://img.shields.io/github/stars/daeunni/Video-Skill-CoT?style=social&label=Star) \n\n[arxiv 2025.06]  MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning [[PDF](https://arxiv.org/abs/2506.05331),[Page](https://github.com/xinyan-cxy/MINT-CoT)] ![Code](https://img.shields.io/github/stars/xinyan-cxy/MINT-CoT?style=social&label=Star) \n\n[arxiv 2025.06] VideoChat-A1: Thinking with Long Videos by Chain-of-Shot Reasoning  [[PDF](https://arxiv.org/abs/2506.06097)]\n\n[arxiv 2025.06] Play to Generalize: Learning to Reason Through Game Play  [[PDF](https://arxiv.org/abs/2506.08011),[Page](https://github.com/yunfeixie233/ViGaL)] ![Code](https://img.shields.io/github/stars/yunfeixie233/ViGaL?style=social&label=Star) \n\n[arxiv 2025.06]  DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO [[PDF](https://arxiv.org/abs/2506.07464)]\n\n[arxiv 2025.06] MMSearch-R1: Incentivizing LMMs to Search  [[PDF](https://arxiv.org/abs/2506.20670),[Page](https://github.com/EvolvingLMMs-Lab/multimodal-search-r1)] ![Code](https://img.shields.io/github/stars/EvolvingLMMs-Lab/multimodal-search-r1?style=social&label=Star) \n\n[arxiv 2025.06]  MiCo: Multi-image Contrast for Reinforcement Visual Reasoning [[PDF](https://arxiv.org/pdf/2506.22434)]\n\n[arxiv 2025.07]  GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning [[PDF](https://arxiv.org/abs/2507.01006),[Page](https://github.com/THUDM/GLM-4.1V-Thinking)] ![Code](https://img.shields.io/github/stars/THUDM/GLM-4.1V-Thinking?style=social&label=Star) \n\n[arxiv 2025.07] Skywork-R1V3 Technical Report  [[PDF](https://arxiv.org/abs/2507.06167),[Page](https://github.com/SkyworkAI/Skywork-R1V)] ![Code](https://img.shields.io/github/stars/SkyworkAI/Skywork-R1V?style=social&label=Star) \n\n[arxiv 2025.08]  Thyme: Think Beyond Images [[PDF](https://arxiv.org/abs/2508.11630),[Page](https://thyme-vl.github.io/)] ![Code](https://img.shields.io/github/stars/yfzhang114/Thyme?style=social&label=Star) \n\n[arxiv 2025.09] Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search  [[PDF](https://arxiv.org/pdf/2509.07969),[Page](https://mini-o3.github.io/)] ![Code](https://img.shields.io/github/stars/Mini-o3/Mini-o3?style=social&label=Star) \n\n[arxiv 2025.10] DeepMMSearch-R1: Empowering Multimodal LLMs in Multimodal Web Search  [[PDF](https://arxiv.org/abs/2510.12801)]\n\n[arxiv 2025.10] MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal Mathematical Reasoning  
[[PDF](https://arxiv.org/abs/2510.14958),[Page](https://mathcanvas.github.io/)] ![Code](https://img.shields.io/github/stars/shiwk24/MathCanvas?style=social&label=Star) \n\n[arxiv 2025.10]  ThinkMorph: Emergent Properties in Multimodal Interleaved Chain-of-Thought Reasoning [[PDF](https://arxiv.org/abs/2510.27492),[Page](https://thinkmorph.github.io/)] ![Code](https://img.shields.io/github/stars/ThinkMorph/ThinkMorph?style=social&label=Star) \n\n[arxiv 2025.12] Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models  [[PDF](https://arxiv.org/abs/2511.23478),[Page](https://github.com/mbzuai-oryx/Video-R2)] ![Code](https://img.shields.io/github/stars/mbzuai-oryx/Video-R2?style=social&label=Star) \n\n[arxiv 2025.12] Video-CoM: Interactive Video Reasoning via Chain of Manipulations  [[PDF](https://arxiv.org/pdf/2511.23477),[Page](https://github.com/mbzuai-oryx/Video-CoM)] ![Code](https://img.shields.io/github/stars/mbzuai-oryx/Video-CoM?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n\n## Compression\n[arxiv 2025.02]  AdaSVD: Adaptive Singular Value Decomposition for Large Language Models [[PDF](https://arxiv.org/abs/2502.01403),[Page](https://github.com/ZHITENGLI/AdaSVD)] ![Code](https://img.shields.io/github/stars/ZHITENGLI/AdaSVD?style=social&label=Star) \n\n[arxiv 2025.02]  Vision-centric Token Compression in Large Language Model [[PDF](https://arxiv.org/pdf/2502.00791)]\n\n[arxiv 2025.02] From 16-Bit to 1-Bit: Visual KV Cache Quantization for Memory-Efficient Multimodal Large Language Models  [[PDF](https://arxiv.org/abs/2502.14882)]\n\n[arxiv 2025.03]  Token-Efficient Long Video Understanding for Multimodal LLMs [[PDF](https://arxiv.org/pdf/2503.04130)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n## few-shot\n[arxiv 2025.02]  Efficient Few-Shot Continual Learning in Vision-Language Models [[PDF](https://arxiv.org/pdf/2502.04098)]\n\n\n## tokenizer\n[arxiv 2025.04] UniViTAR: Unified Vision Transformer with Native Resolution  [[PDF](https://arxiv.org/abs/2504.01792)]\n\n[arxiv 2025.04]  UniToken: Harmonizing Multimodal Understanding and Generation through Unified Visual Encoding [[PDF](https://arxiv.org/abs/2504.04423),[Page](https://github.com/SxJyJay/UniToken)] ![Code](https://img.shields.io/github/stars/SxJyJay/UniToken?style=social&label=Star) \n\n[arxiv 2025.05]  OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning [[PDF](https://arxiv.org/abs/2505.04601),[Page](https://ucsc-vlaa.github.io/OpenVision/)] ![Code](https://img.shields.io/github/stars/UCSC-VLAA/OpenVision?style=social&label=Star) \n\n[arxiv 2025.05] ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation  [[PDF](https://arxiv.org/pdf/2505.16495),[Page](https://github.com/yayafengzi/ALToLLM)] ![Code](https://img.shields.io/github/stars/yayafengzi/ALToLLM?style=social&label=Star) \n\n[arxiv 2025.05] UniCTokens: Boosting Personalized Understanding and Generation via Unified Concept Tokens  [[PDF](https://arxiv.org/abs/2505.14671),[Page](https://github.com/arctanxarc/UniCTokens)] ![Code](https://img.shields.io/github/stars/arctanxarc/UniCTokens?style=social&label=Star) \n\n[arxiv 2025.05] Slot-MLLM: Object-Centric Visual Tokenization for Multimodal LLM  [[PDF](https://arxiv.org/abs/2505.17726),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) 
\n\n[arxiv 2025.06] UniWorld-V1: High-Resolution Semantic Encoders for Unified Visual Understanding and Generation  [[PDF](https://arxiv.org/abs/2506.03147),[Page](https://github.com/PKU-YuanGroup/UniWorld-V1)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/UniWorld-V1?style=social&label=Star) \n\n[arxiv 2025.07]  When Tokens Talk Too Much: A Survey of Multimodal Long-Context Token Compression across Images, Videos, and Audios [[PDF](https://arxiv.org/abs/2507.20198),[Page](https://github.com/cokeshao/Awesome-Multimodal-Token-Compression)] ![Code](https://img.shields.io/github/stars/cokeshao/Awesome-Multimodal-Token-Compression?style=social&label=Star) \n\n[arxiv 2025.09]  Visual Representation Alignment for Multimodal Large Language Models [[PDF](https://arxiv.org/abs/2509.07979),[Page](https://cvlab-kaist.github.io/VIRAL/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/VIRAL?style=social&label=Star) \n\n[arxiv 2025.10] UniFlow: A Unified Pixel Flow Tokenizer for Visual Understanding and Generation  [[PDF](https://arxiv.org/abs/2510.10575),[Page](https://github.com/ZhengrongYue/UniFlow)] ![Code](https://img.shields.io/github/stars/ZhengrongYue/UniFlow?style=social&label=Star) \n\n[arxiv 2025.10]  ViCO: A Training Strategy towards Semantic Aware Dynamic High-Resolution [[PDF](https://arxiv.org/abs/2510.12793),[Page](https://huggingface.co/collections/OpenGVLab/internvl35-flash-68d7bea4d20fb9f70f145ff8)] \n\n[arxiv 2025.11]  Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens [[PDF](https://arxiv.org/abs/2511.19418),[Page](https://wakalsprojectpage.github.io/comt-website/)] ![Code](https://img.shields.io/github/stars/Wakals/CoVT?style=social&label=Star) \n\n[arxiv 2026.02]  UniWeTok: A Unified Binary Tokenizer with Codebook Size 2^128 for Unified Multimodal Large Language Model [[PDF](https://arxiv.org/abs/2602.14178)]\n\n[arxiv 2026.03] EvoTok: A Unified Image Tokenizer via Residual Latent Evolution for Visual Understanding and Generation  [[PDF](https://arxiv.org/abs/2603.12108)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n## RoPE\n[arxiv 2025.05] Circle-RoPE: Cone-like Decoupled Rotary Positional Embedding for Large Vision-Language Models  [[PDF](https://arxiv.org/abs/2505.16416),[Page](https://github.com/lose4578/CircleRoPE)] ![Code](https://img.shields.io/github/stars/lose4578/CircleRoPE?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## audio \n[arxiv 2024.10] MuVi: Video-to-Music Generation with Semantic Alignment and Rhythmic Synchronization [[PDF](https://arxiv.org/abs/2410.12957)]\n\n[arxiv 2024.11] Video-Guided Foley Sound Generation with Multimodal Controls [[PDF](https://arxiv.org/abs/2411.17698), [Page](https://ificl.github.io/MultiFoley/)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## Study \n[arxiv 2025.03] Should VLMs be Pre-trained with Image Data?  
[[PDF](https://arxiv.org/abs/2503.07603)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## agent\n[arxiv 2025.08]  OPENCUA: Open Foundations for Computer-Use Agents [[PDF](https://arxiv.org/abs/2508.09123),[Page](https://opencua.xlang.ai/)] ![Code](https://img.shields.io/github/stars/xlang-ai/OpenCUA?style=social&label=Star) \n\n[arxiv 2025.09]  Tool-R1: Sample-Efficient Reinforcement Learning for Agentic Tool Use [[PDF](https://arxiv.org/pdf/2509.12867),[Page](https://github.com/YBYBZhang/Tool-R1)] \n\n[arxiv 2025.12] Step-GUI Technical Report  [[PDF](https://arxiv.org/abs/2512.15431),[Page](https://github.com/stepfun-ai/gelab-zero)] ![Code](https://img.shields.io/github/stars/stepfun-ai/gelab-zero?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n## memory\n[arxiv 2025.08]  Seeing, Listening, Remembering, and Reasoning: A Multimodal Agent with Long-Term Memory [[PDF](https://arxiv.org/abs/2508.09736),[Page](https://github.com/bytedance-seed/m3-agent)] ![Code](https://img.shields.io/github/stars/bytedance-seed/m3-agent?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n## context\n[arxiv 2025.10] Glyph: Scaling Context Windows via Visual-Text Compression  [[PDF](https://arxiv.org/pdf/2510.17800),[Page](https://github.com/thu-coai/Glyph)] ![Code](https://img.shields.io/github/stars/thu-coai/Glyph?style=social&label=Star) \n\n[arxiv 2025.10]  DeepSeek-OCR: Contexts Optical Compression [[PDF](https://arxiv.org/abs/2510.18234),[Page](https://github.com/deepseek-ai/DeepSeek-OCR)] ![Code](https://img.shields.io/github/stars/deepseek-ai/DeepSeek-OCR?style=social&label=Star) \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n## speed \n[arxiv 2024.10]PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction[[PDF](https://arxiv.org/abs/2410.17247), [Page]()]\n\n[arxiv 2024.12]  [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster\n [[PDF](https://arxiv.org/abs/2412.01818),[Page](https://theia-4869.github.io/FasterVLM)] ![Code](https://img.shields.io/github/stars/Theia-4869/FasterVLM?style=social&label=Star) \n\n\n[arxiv 2025.03]  Dynamic Pyramid Network for Efficient Multimodal Large Language Model [[PDF](https://arxiv.org/abs/2503.20322)]\n\n[arxiv 2025.03]  Mobile-VideoGPT Fast and Accurate Video Understanding Language Model [[PDF](https://arxiv.org/pdf/2503.21782),[Page](https://amshaker.github.io/Mobile-VideoGPT/)] ![Code](https://img.shields.io/github/stars/Amshaker/Mobile-VideoGPT?style=social&label=Star) \n\n[arxiv 2025.03]  InternVL-X: Advancing and Accelerating InternVL Series with Efficient Visual Token Compression [[PDF](https://arxiv.org/abs/2503.21307),[Page](https://github.com/ludc506/InternVL-X)] ![Code](https://img.shields.io/github/stars/ludc506/InternVL-X?style=social&label=Star) \n\n[arxiv 2025.04]  Memory-efficient Streaming VideoLLMs for Real-time Procedural Video Understanding [[PDF](https://arxiv.org/abs/2504.13915),[Page](https://dibschat.github.io/ProVideLLM/)] \n\n[arxiv 2025.08] MMTok: Multimodal Coverage Maximization for Efficient Inference of VLMs  [[PDF](https://arxiv.org/abs/2508.18264),[Page](https://cv.ironieser.cc/projects/mmtok.html)] 
![Code](https://img.shields.io/github/stars/Ironieser/MMTok/?style=social&label=Star) \n\n[arxiv 2025.10] AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model  [[PDF](https://arxiv.org/abs/2510.11496),[Page](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation)] ![Code](https://img.shields.io/github/stars/OPPO-Mente-Lab/AndesVL_Evaluation?style=social&label=Star) \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n## Evaluation\n\n[arxiv 2025.04]  MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models [[PDF](https://arxiv.org/abs/2504.03641),[Page](https://mme-unify.github.io/)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n[arxiv 2025.05] On Path to Multimodal Generalist: General-Level and General-Bench  [[PDF](https://arxiv.org/abs/2505.04620),[Page](https://generalist.top/)] ![Code](https://img.shields.io/github/stars/path2generalist/General-Level?style=social&label=Star) \n\n[arxiv 2025.05] SpatialScore: Towards Unified Evaluation for Multimodal Spatial Understanding  [[PDF](https://arxiv.org/abs/2505.17012),[Page](https://haoningwu3639.github.io/SpatialScore)] ![Code](https://img.shields.io/github/stars/haoningwu3639/SpatialScore/?style=social&label=Star) \n\n[arxiv 2025.05]  LENS: Multi-level Evaluation of Multimodal Reasoning with Large Language Models [[PDF](https://arxiv.org/abs/2505.15616),[Page](https://lens4mllms.github.io/mars2-workshop-iccv2025/)] \n\n[arxiv 2025.05] HumaniBench: A Human-Centric Framework for Large Multimodal Models Evaluation  [[PDF](https://arxiv.org/abs/2505.11454)]\n\n[arxiv 2025.05] Human-Aligned Bench: Fine-Grained Assessment of Reasoning Ability in MLLMs vs. Humans  [[PDF](https://arxiv.org/abs/2505.11141),[Page](https://yansheng-qiu.github.io/human-aligned-bench.github.io/)] ![Code](https://img.shields.io/github/stars/yansheng-qiu/Human_Aligned_Bench?style=social&label=Star) \n\n[arxiv 2025.10] Generative Universal Verifier as Multimodal Meta-Reasoner  [[PDF](https://arxiv.org/abs/2510.13804),[Page](https://omniverifier.github.io/)] ![Code](https://img.shields.io/github/stars/Cominclip/OmniVerifier?style=social&label=Star) \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star) \n\n\n"
  },
  {
    "path": "README.md",
    "content": "# Video Generation\n\n\n\n\nA reading list of Image/Video Generation, Multi-Modality Understanding and Generation, and Virtual Human. \n\n* [Video Generation](https://github.com/yzhang2016/video-generation-survey/blob/main/video-generation.md)\n\n* [Editing in Diffusion](https://github.com/yzhang2016/video-generation-survey/blob/main/Editing-in-Diffusion.md)\n\n* [Multi-Modality Understanding and Generation](https://github.com/yzhang2016/video-generation-survey/blob/main/Multi-modality%20Generation.md)\n\n* [Virtual Human](https://github.com/yzhang2016/video-generation-survey/blob/main/virtual_human.md)\n\n\n## News \n- Here is a tool for visualizing those literatures: [Paper Visualization](https://auto202603.github.io/video-generation-survey-vis/)\n  \n<img width=\"1348\" height=\"667\" alt=\"image\" src=\"https://github.com/user-attachments/assets/c396be23-c8bc-422c-897e-b3e62bc52b56\" />\n"
  },
  {
    "path": "Text-to-Image.md",
    "content": "# Text-to-image Generation \n\n## AIGC Datasets \n * **CommonCanvas** CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images [[PDF](https://arxiv.org/abs/2310.16825)]\n\n    Feature: 70 millions of high-quality images with high-quality synthetic captions\n\n* **JDB** JourneyDB: A Benchmark for Generative Image Understanding [[PDF](https://arxiv.org/abs/2307.00716), [Page](https://journeydb.github.io/)]\n\n    Feature: 4 millions of Midjourney images \n\n* **DiffusionDB** DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models [[PDF](https://arxiv.org/abs/2210.14896), [Page](https://github.com/poloclub/diffusiondb)]\n\n    Feature: 14 millions of Stable Diffusion images \n  \n\n\n\n## Diffusion-based \n\n*[ICML 2021; OpenAI ] **---GLIDE---** GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models \\[[PDF](https://arxiv.org/pdf/2112.10741.pdf), [Code](https://github.com/openai/glide-text2im)\\]\n\n[arxiv 2022; Microsoft] Vector Quantized Diffusion Model for Text-to-Image Synthesis \\[[PDF](https://arxiv.org/pdf/2111.14822.pdf), [Code](https://github.com/cientgu/VQ-Diffusion)\\]\n\n[CVPR 2022; SUNY] Towards Language-Free Training for Text-to-Image Generation \\[[PDF](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhou_Towards_Language-Free_Training_for_Text-to-Image_Generation_CVPR_2022_paper.pdf), [Code](https://github.com/drboog/Lafite)\\]\n\n[ECCV 2022; UIUC ] Compositional Visual Generation with Composable Diffusion Models \\[[PDF](https://arxiv.org/pdf/2206.01714.pdf), [Code](https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch)\\]\n\n[arxiv 2022; ByteDance] CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP \\[[PDF](https://arxiv.org/pdf/2203.00386.pdf), [Code](https://github.com/HFAiLab/clip-gen)\\]\n\n*[arxiv 2022; OpenAI ]  **---DALL-E2---** Hierarchical Text-Conditional Image Generation with CLIP Latents \\[[PDF](https://arxiv.org/pdf/2204.06125.pdf), Code\\]\n\n*[CVPR 2022] **---LDM---** High-Resolution Image Synthesis with Latent Diffusion Models \\[[PDF](https://arxiv.org/pdf/2112.10752.pdf), [Code](https://github.com/CompVis/latent-diffusion)\\]\n\n*[arxiv 2022; Goole] **---Imagen---**  Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding \\[[PDF](https://arxiv.org/pdf/2205.11487.pdf), Code\\]\n\n[arxiv 2023.01] Simple diffusion: End-to-end diffusion for high resolution images [[PDF](https://arxiv.org/pdf/2301.11093.pdf) ]\n\n[arxiv 2023.07]SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesi [[PDF](https://arxiv.org/abs/2307.01952), [Page](https://github.com/Stability-AI/generative-models/tree/main)]\n\n[arxiv 2023.09]Emu: Enhancing Image Generation Models Using Photogenic Needles in a Haystack [[PDF](https://arxiv.org/abs/2309.15807),[Page](https://ai.meta.com/research/publications/emu-enhancing-image-generation-models-using-photogenic-needles-in-a-haystack/)]\n\n[tech report] DALLE-3: Improving Image Generation with Better Captions [[PDF](https://cdn.openai.com/papers/dall-e-3.pdf),[Page](https://openai.com/dall-e-3)]\n\n[arxiv 2023.10]Matryoshka Diffusion Models [[PDF](https://arxiv.org/abs/2310.15111)]\n\n[arxiv 2023.12]Kandinsky-3: Text-to-image diffusion model [[PDF](https://arxiv.org/abs/2312.03511),[Page](https://github.com/ai-forever/Kandinsky-3)]\n\n[arxiv 2024.01]Taiyi-Diffusion-XL: Advancing Bilingual Text-to-Image Generation with 
Large Vision-Language Model Support [[PDF](https://arxiv.org/abs/2401.14688)]\n\n[arxiv 2024.02]Playground v2.5: Three Insights towards Enhancing Aesthetic Quality in Text-to-Image Generation [[PDF](https://arxiv.org/abs/2402.17245),[Page](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)]\n\n[arxiv 2024.03]SD3: Scaling Rectified Flow Transformers for High-Resolution Image Synthesis [[PDF](https://arxiv.org/abs/2403.03206)]\n\n[arxiv 2024.03]PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation [[PDF](https://arxiv.org/abs/2403.04692),[Page](https://pixart-alpha.github.io/PixArt-sigma-project/)]\n\n[arxiv 2024.03]CogView3: Finer and Faster Text-to-Image Generation via Relay Diffusion [[PDF](https://arxiv.org/abs/2403.05121)]\n\n[arxiv 2024.03]Multistep Consistency Models [[PDF](https://arxiv.org/abs/2403.06807)]\n\n[arxiv 2024.04]CosmicMan: A Text-to-Image Foundation Model for Humans [[PDF](https://arxiv.org/abs/2404.01294),[Page](https://cosmicman-cvpr2024.github.io/)]\n\n[arxiv 2024.05] Hunyuan-DiT : A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding [[PDF](https://tencent.github.io/HunyuanDiT/asset/Hunyuan_DiT_Tech_Report_05140553.pdf),[Page](https://github.com/Tencent/HunyuanDiT?tab=readme-ov-file)]\n\n[arxiv 2024.05] Improving the Training of Rectified Flows[[PDF](https://arxiv.org/abs/2405.20320),[Page](https://github.com/sangyun884/rfpp)]\n\n[arxiv 2024.06]Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT [[PDF](https://arxiv.org/abs/2406.18583),[Page](https://github.com/Alpha-VLLM/Lumina-T2X)]\n\n[arxiv 2024.7]Kolors: Effective Training of Diffusion Model for Photorealistic Text-to-Image Synthesis [[PDF](https://github.com/Kwai-Kolors/Kolors/blob/master/imgs/Kolors_paper.pdf),[Page](https://github.com/Kwai-Kolors/Kolors/tree/master)]\n\n\n[arxiv 2024.08] Imagen 3[[PDF](https://arxiv.org/abs/2408.07009)]\n\n[arxiv 2024.10] Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer  [[PDF](https://arxiv.org/abs/2410.10629),[Page](https://nvlabs.github.io/Sana/)]\n\n[arxiv 2025.02] SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer  [[PDF](https://arxiv.org/abs/2501.18427),[Page](https://github.com/NVlabs/Sana)] ![Code](https://img.shields.io/github/stars/NVlabs/Sana?style=social&label=Star)\n\n[arxiv 2025.02]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## GAN/VAE/Transformer-based \n\n[ICML 2021; OpenAI ] Zero-Shot Text-to-Image Generation \\[[PDF](https://arxiv.org/pdf/2102.12092.pdf), [Code 3](https://github.com/YoadTew/zero-shot-image-to-text)\\]\n\n[CVPR 2021; Google ] Cross-Modal Contrastive Learning for Text-to-Image Generation \\[[PDF](https://arxiv.org/pdf/2101.04702.pdf), [Code](https://github.com/google-research/xmcgan_image_generation)\\]\n\n[KDD, 2021; Alibaba ] **---M6---**  M6 : A Chinese Multimodal Pretrainer \\[[PDF](https://arxiv.org/pdf/2103.00823.pdf), Code\\]\n\n[arxiv 2021; Baidu] **---ERNIE-ViLG---** ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation \\[[PDF](https://arxiv.org/pdf/2112.15283.pdf), Code\\]\n\n[ECCV 2022] **---DT2I---** DT2I: Dense Text-to-Image Generation from Region Descriptions \\[[PDF](https://arxiv.org/pdf/2204.02035.pdf), Code\\]\n\n*[arxiv 2022; Meta ] **---Make-a-scene---** Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors 
\\[[PDF](https://arxiv.org/pdf/2203.13131.pdf), [Code 3](https://github.com/CasualGANPapers/Make-A-Scene)\\]\n\n*[arxiv 2022; Google] **---Parti---** Scaling Autoregressive Models for Content-Rich Text-to-Image Generation \\[[PDF](https://arxiv.org/pdf/2206.10789.pdf), Code\\]\n\n[arxiv 2022; Tsinghua ] **---CogView2---** CogView2: Faster and Better Text-to-Image Generation via Hierarchical Transformers \\[[PDF](https://arxiv.org/pdf/2204.14217.pdf), [Code](https://github.com/THUDM/CogView2)\\]\n\n[ECCV 2022; Microsoft] **---NÜWA--** NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion \\[[PDF](https://arxiv.org/pdf/2111.12417.pdf), code \\]\n\n[NIPS 2022; Microsoft] **---NÜWA-Infinity--** NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis \\[[PDF](https://arxiv.org/pdf/2207.09814.pdf), code \\]\n\n*[arxiv 2023.1; Google] **---Muse---** Muse: Text-To-Image Generation via Masked Generative Transformers [[PDF](https://arxiv.org/abs/2301.00704), [Page](https://muse-model.github.io/)]\n\n[arxiv 2023.1]Attribute-Centric Compositional Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2301.01413.pdf)]\n\n[arxiv 2023.01]**---StyleGAN-T---** StyleGAN-T: Unlocking the Power of GANs for Fast Large-Scale Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2301.09515), [Page](https://sites.google.com/view/stylegan-t/)]\n\n[arxiv 2023.03]Scaling up GANs for Text-to-Image Synthesis[[PDF](https://arxiv.org/abs/2303.05511), [Page](https://mingukkang.github.io/GigaGAN/)]\n\n[arxiv 2023.10]PIXART-α: Fast Training of Diffusion Transformer for Photorealistic Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2310.00426), [Page](https://pixart-alpha.github.io/)]\n\n[arxiv 2024.08]VAR-CLIP: Text-to-Image Generator with Visual Auto-Regressive Modeling [[PDF](https://arxiv.org/abs/2408.01181),[Page](https://github.com/daixiangzi/VAR-CLIP)]\n\n[arxiv 2024.09]MaskBit: Embedding-free Image Generation via Bit Token [[PDF](https://arxiv.org/abs/2409.16211),[Page](https://weber-mark.github.io/projects/maskbit.html)]\n\n\n[arxiv 2025.02]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## autoregressive \n[arxiv 2023.07]Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning [[PDF](https://ai.meta.com/research/publications/scaling-autoregressive-multi-modal-models-pretraining-and-instruction-tuning/)]\n\n[arxiv 2024.04]Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction [[PDF](https://arxiv.org/abs/2404.02905), [Page](https://github.com/FoundationVision/VAR)]\n\n[arxiv 2024.06]Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation[[PDF](https://arxiv.org/abs/2406.06525), [Page](https://github.com/FoundationVision/LlamaGen)]\n\n[arxiv 2024.06]Autoregressive Image Generation without Vector Quantization [[PDF](https://arxiv.org/abs/2406.11838),]\n\n[arxiv 2024.07]MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis [[PDF](https://arxiv.org/abs/2407.07614),[Page](https://github.com/fusiming3/MARS)]\n\n[arxiv 2024.08]Scalable Autoregressive Image Generation with Mamba [[PDF](https://arxiv.org/abs/2408.12245),[Page](https://github.com/hp-l33/AiM)]\n\n[arxiv 2024.10] Meissonic: Revitalizing Masked Generative Transformers for Efficient High-Resolution Text-to-Image Synthesis[[PDF](https://arxiv.org/abs/2410.08261),[Page](https://huggingface.co/MeissonFlow/Meissonic)]\n\n[arxiv 2024.10]  DART: Denoising Autoregressive 
Transformer for Scalable Text-to-Image Generation [[PDF](https://arxiv.org/abs/2410.08159)]\n\n[arxiv 2024.10] Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens[[PDF](https://arxiv.org/abs/2410.13863)]\n\n[arxiv 2025.02]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n# Generation & Super-resolution \n\n[TPAMI 2022; Google ] Image Super-Resolution via Iterative Refinement \\[[PDF](https://arxiv.org/pdf/2104.07636.pdf), [Code](https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement)\\]\n\n[CVPR 2022; POSTECH ]Autoregressive Image Generation using Residual Quantization \\[[PDF](https://arxiv.org/pdf/2203.01941.pdf), [Code](https://github.com/kakaobrain/rq-vae-transformer)\\]\n\n[SIGGRAPH 2022; Google ] **---Palette---** Palette: Image-to-Image Diffusion Models\\[[PDF](https://arxiv.org/pdf/2111.05826.pdf), [Code](https://github.com/Janspiry/Palette-Image-to-Image-Diffusion-Models)\\]\n\n[arxiv 2022; Google] Cascaded Diffusion Models for High Fidelity Image Generation\\[[PDF](https://arxiv.org/pdf/2106.15282.pdf), Code\\]\n\n[arxiv 2023.06]Designing a Better Asymmetric VQGAN for StableDiffusion [[PDF](https://arxiv.org/abs/2306.04632), [code](https://github.com/buxiangzhiren/Asymmetric_VQGAN)]\n\n## Scene \n[arxiv 2022.12]Generative Scene Synthesis via Incremental View Inpainting using RGBD Diffusion Models [[PDF](https://arxiv.org/pdf/2212.05993.pdf)]\n\n[arxiv 2022.12]Benchmarking Spatial Relationships in Text-to-Image Generation [[PDF](https://arxiv.org/pdf/2212.10015.pdf)]\n\n\n# privacy \n[arxiv 2023.02, DeepMind]Differentially Private Diffusion Models Generate Useful Synthetic Images [[PDF](https://arxiv.org/abs/2302.13861)]\n\n\n# Transformer Related \n*[ICLR 2022, Google]**---ViT-VQGAN---** Vector-quantized Image Modeling with Improved VQGAN [[PDF](https://arxiv.org/abs/2110.04627)]\n\n*[CVPR 2021, HEIDELBERG] **---VQGAN---** Taming transformers for high-resolution image synthesis[[PDF](https://arxiv.org/abs/2012.09841), [Page](https://compvis.github.io/taming-transformers/), [code](https://github.com/CompVis/taming-transformers)]\n\n*[arxiv 2022.02]MaskGIT: Masked Generative Image Transformer [[PDF](https://arxiv.org/pdf/2202.04200.pdf)]\n\n# Diffusion related \n*[arxiv 2022.12] Scalable Diffusion Models with Transformers [[PDF](https://arxiv.org/abs/2212.09748), [Page](https://www.wpeebles.com/DiT)]\n\n[arxiv 2024.01]Scalable High-Resolution Pixel-Space Image Synthesis with Hourglass Diffusion Transformers [[PDF](https://arxiv.org/abs/2401.11605),[Page](https://crowsonkb.github.io/hourglass-diffusion-transformers/)]\n\n\n## Benchmark \n[arxiv 2023.10]DEsignBench: Exploring and Benchmarking DALL-E 3 for Imagining Visual Design [[PDF](https://arxiv.org/abs/2310.15144), [Page](https://design-bench.github.io/)]\n\n\n\n\n## Study \n[arxiv 2023.02] A Pilot Evaluation of ChatGPT and DALL-E 2 on Decision Making and Spatial Reasoning [[PDF](https://arxiv.org/abs/2302.09068)]\n\n[arxiv 2023.02] Exploring the Representation Manifolds of Stable Diffusion Through the Lens of Intrinsic Dimension [[PDF](https://arxiv.org/abs/2302.09301)]\n\n[arxiv 2023.02]Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness [[PDF](https://arxiv.org/abs/2302.10893)]\n\n[arxiv 2023.02] Unsupervised Discovery of Semantic Latent Directions in Diffusion Models [[PDF](https://arxiv.org/pdf/2302.12469.pdf)]\n\n[arxiv 2023.03]A Prompt Log Analysis of Text-to-Image Generation Systems 
[[PDF](https://arxiv.org/abs/2303.04587)]\n\n[arxiv 2023.10]A Picture is Worth a Thousand Words: Principled Recaptioning Improves Image Generation[[PDF](https://arxiv.org/abs/2310.16656)]\n\n[arxiv 2023.10]Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-Image Generation\n[[PDF](https://arxiv.org/abs/2310.18235),[Page](https://google.github.io/DSG)]\n\n\n### Image caption \n[arxiv 2023.07]SITTA: A Semantic Image-Text Alignment for Image Captioning [[PDF](https://arxiv.org/abs/2307.05591), [Page](https://github.com/ml-jku/semantic-image-text-alignment)]\n"
  },
  {
    "path": "video-generation.md",
    "content": "# Video Generation Survey\nA reading list of video generation\n\n## Joint audio-video generation product\n*  **Veo3** [[Page](https://aistudio.google.com/models/veo-3)]\n*  **Sora2** [[Page](https://openai.com/index/sora-2/)]\n*  **Wan2.5 Preview** [[Page](https://wan.video/)]\n*  **Gaga** [[Page](https://gaga.art/app)]\n*  **Grok Imagine** [[Page](https://grok.com/imagine)]\n*  **Ovi** [[Page](https://github.com/character-ai/Ovi)] (opensource)\n*  **LTX-2** [[Page](https://github.com/Lightricks/LTX-2)] (opensource)\n\n## Repo for open-sora\n\n[2024.03] [HPC-AI Open-Sora](https://github.com/hpcaitech/Open-Sora) \n\n[2024.03] [PKU Open-Sora Plan](https://github.com/PKU-YuanGroup/Open-Sora-Plan)\n\n\n## Related surveys \n[Awesome-Video-Diffusion-Models](https://github.com/ChenHsing/Awesome-Video-Diffusion-Models?tab=readme-ov-file)\n\n[Awesome-Text-to-Image](https://github.com/Yutong-Zhou-cv/Awesome-Text-to-Image?tab=readme-ov-file#head-ti2i)\n\n## :point_right: Models to play with\n\n### Open source\n\n* **VideoCrafter/Floor33** [[Page](http://floor33.tech/)], [[Discord](https://discord.gg/rrayYqZ4tf)], [[Code & Models](https://github.com/AILab-CVC/VideoCrafter)]\n\n* **ModelScope** [[Page](https://modelscope.cn/models/damo/text-to-video-synthesis/summary), [i2v](https://modelscope.cn/models/damo/Image-to-Video/summary)], [[Code & Models](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)]\n\n* **Hotshot-XL** [[Page](https://hotshot.co/)], [[Code & Models](https://github.com/hotshotco/Hotshot-XL)]\n\n* **AnimeDiff** [[Page](https://animatediff.github.io/), [Code & Models](https://github.com/guoyww/AnimateDiff)]\n\n* **Zeroscope V2 XL** [[Page](https://huggingface.co/cerspense/zeroscope_v2_XL)]\n\n* **MuseV** [[Page](https://github.com/TMElyralab/MuseV)] \n\n* **opensora plan** [[Page](https://github.com/PKU-YuanGroup/Open-Sora-Plan)]\n\n* **opensora** [[Page](https://github.com/hpcaitech/Open-Sora)]\n\n*  **easyanimate** [[Page](https://github.com/aigc-apps/EasyAnimate)]\n\n*  **Cogvideo X** [[Page](https://github.com/THUDM/CogVideo)]\n\n*  **Mochi from Genmo** [[Page](https://huggingface.co/genmo/mochi-1-preview#running)]\n\n*  **Hunyuan Video** [[Page](https://github.com/Tencent/HunyuanVideo)]\n\n\n\n### Non-open source\n\n* **Gen-1/Gen-2** [[Page](https://research.runwayml.com/gen2)]\n\n* **Pika Lab** [[Page](https://www.pika.art/)], [[Discord](http://discord.gg/pika)]\n\n* **Moonvalley** [[Page](https://moonvalley.ai/)], [[Discord](https://discord.gg/vk3aaH7r)]\n\n* **Leonard Ai** [[Page](https://leonardo.ai/)]\n\n* **Morph Studio** [[Page](https://www.morphstudio.xyz/)], [[Discord](https://discord.gg/hjd9JvXTU5)]\n  \n* **Lensgo** [[Page](https://lensgo.ai/), [Discord]()]\n\n* **Genmo** [[Page](https://www.genmo.ai/)]\n\n* **PlaiDay** [[Discord](https://discord.gg/6f6Q9pWb)]\n\n* **Nerverends** [[Page](https://neverends.life/create)]\n\n* **HiDream.ai/Pixeling** [[Page](https://hidream.ai/#/Pixeling)]\n\n* **Assistant++** [[Page](https://assistive.chat/video)]\n\n* **PixVerse**[[Page](https://pixverse.ai/)]\n\n* **ltx.studio**[[Page](https://ltx.studio/)]\n\n* **Haiper** [[Page](https://app.haiper.ai/explore)]\n\n* **vivago.ai**[[Page](https://vivago.ai/video)]\n\n* **智谱AI**[[Page](https://chatglm.cn/video)]\n\n### Translation \n* **Goenhance.ai**[[Page](https://www.goenhance.ai/)]\n\n* **ViggleAI**[[Page](https://t.co/2GMBpUOyHL)]\n\n\n\n## Databases\n\n* **HowTo100M**\n\n  [ICCV 2019] Learning a Text-Video Embedding by Watching Hundred Million Narrated 
Video Clips \\[[PDF](https://arxiv.org/pdf/1906.03327.pdf), [Project](https://www.di.ens.fr/willow/research/howto100m/) \\]\n\n* **HD-VILA-100M**\n  \n  [CVPR 2022]Advancing High-Resolution Video-Language Representation with Large-Scale Video Transcriptions [[PDF](https://openaccess.thecvf.com/content/CVPR2022/papers/Xue_Advancing_High-Resolution_Video-Language_Representation_With_Large-Scale_Video_Transcriptions_CVPR_2022_paper.pdf), [Page](https://github.com/microsoft/XPretrain/blob/main/hd-vila-100m/README.md)]\n  \n* **WebVid-10M**\n\n  [ICCV 2021]Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval \\[[PDF](https://arxiv.org/pdf/2104.00650.pdf), [Project](https://github.com/m-bain/webvid) \\]\n  \n* **UCF-101**\n\n  [arxiv 2012] UCF101: A dataset of 101 human actions classes from videos in the wild \\[[PDF](https://arxiv.org/pdf/1212.0402.pdf), [Project](https://www.crcv.ucf.edu/data/UCF101.php) \\]\n  \n* **Sky Time-lapse** \n\n  [CVPR 2018] Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks \\[[PDF](https://openaccess.thecvf.com/content_cvpr_2018/papers/Xiong_Learning_to_Generate_CVPR_2018_paper.pdf), [Project](https://github.com/weixiong-ur/mdgan) \\]\n  \n* **TaiChi** \n\n  [NIPS 2019] First order motion model for image animation \\[ [PDF](https://papers.nips.cc/paper/2019/file/31c0b36aef265d9221af80872ceb62f9-Paper.pdf), [Project](https://github.com/AliaksandrSiarohin/first-order-model) \\]\n\n* **CelebV-Text**\n  \n  [arxiv ]CelebV-Text: A Large-Scale Facial Text-Video Dataset [[PDF](), [Page](https://celebv-text.github.io/)]\n\n* **Youku-mPLUG**\n  \n  [arxiv 2023.06]Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Dataset for Pre-training and Benchmarks [[PDF](https://arxiv.org/abs/2306.04362)]\n\n* **InternVid**\n  \n  [arxiv 2023.07]InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation [[PDF](https://arxiv.org/abs/2307.06942)]\n\n* **DNA-Rendering**\n  \n  [arxiv 2023.07] DNA-Rendering: A Diverse Neural Actor Repository for High-Fidelity Human-centric Rendering [[PDF](https://arxiv.org/abs/2307.10173)]\n\n* **Vimeo25M** (not open-source)\n  \n  [arxiv 2023.09] LAVIE: HIGH-QUALITY VIDEO GENERATION WITH CASCADED LATENT DIFFUSION MODELS [[PDF](https://arxiv.org/pdf/2309.15103.pdf)]\n\n* **HD-VG-130M**\n\n  [arxiv 2023.06]VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2305.10874), [Page](https://github.com/daooshee/HD-VG-130M)]\n\n* **Panda-70M**\n\n  [arxiv 2024.02] Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers [[PDF](https://arxiv.org/abs/2402.19479), [Page](https://snap-research.github.io/Panda-70M/)]\n\n* **ChronoMagic-Pro**\n\n  [arxiv 2024.06]ChronoMagic-Bench: A Benchmark for Metamorphic Evaluation of Text-to-Time-lapse Video Generation [[PDF](https://arxiv.org/abs/2406.18522), [Page](https://github.com/PKU-YuanGroup/ChronoMagic-Bench)]\n\n* **OpenVid-1M**\n  [arxiv 2024.07] A Large-Scale Dataset for High-Quality Text-to-Video Generation  [[PDF](http://export.arxiv.org/pdf/2407.02371),[Page](https://nju-pcalab.github.io/projects/openvid/)]\n\n\n* **Koala-36M**\n[arxiv 2024.10]Koala-36M: A Large-scale Video Dataset Improving Consistency between Fine-grained Conditions and Video Content[[PDF](https://arxiv.org/abs/2410.08260),[Page](https://koala36m.github.io/)]\n\n\n  \n* **LVD-2M**\n  [arxiv 2024.10] LVD-2M: A Long-take Video Dataset with Temporally Dense Captions  [[PDF](https://arxiv.org/abs/2410.10816),[Page](https://github.com/SilentView/LVD-2M)]\n\n\n* **MovieBench**\n  [arxiv 2024.11]MovieBench: A Hierarchical Movie Level Dataset for Long Video 
Generation  [[PDF](https://weijiawu.github.io/MovieBench/),[Page](https://weijiawu.github.io/MovieBench/)] ![Code](https://img.shields.io/github/stars/showlab/MovieBench?style=social&label=Star)\n\n* **VIVID-10M**\n  [arxiv 2024.11]VIVID-10M: A Dataset and Baseline for Versatile and Interactive Video Local Editing [[PDF](https://arxiv.org/abs/2411.15260),[Page](https://inkosizhong.github.io/VIVID/)] \n\n* **OpenHumanVid**\n  [arxiv 2024.12]A Large-Scale High-Quality Dataset for Enhancing Human-Centric Video Generation [[PDF](https://arxiv.org/abs/2412.00115),[Page](https://github.com/fudan-generative-vision/OpenHumanVid)]  ![Code](https://img.shields.io/github/stars/fudan-generative-vision/OpenHumanVid?style=social&label=Star)\n\n* **Señorita-2M**\n  [arxiv 2025.02]  Señorita-2M: A High-Quality Instruction-based Dataset for General Video Editing by Video Specialists [[PDF](https://arxiv.org/abs/2502.06734),[Page](https://senorita.github.io/)] \n\n\n* **VideoUFO**\n  [arxiv 2025.03]  VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation [[PDF](https://arxiv.org/pdf/2503.01739),[Page](https://huggingface.co/datasets/WenhaoWang/VideoUFO)] \n\n* **HOIGen-1M**\n  [arxiv 2025.04]  HOIGen-1M: A Large-scale Dataset for Human-Object Interaction Video Generation [[PDF](https://arxiv.org/abs/2503.23715),[Page](https://liuqi-creat.github.io/HOIGen.github.io/)] \n\n* **UltraVideo**\n  [arxiv 2025.06]  UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions [[PDF](https://arxiv.org/abs/2506.13691),[Page](https://xzc-zju.github.io/projects/UltraVideo/)] \n\n\n* **Sekai: world exploration**\n  [arxiv 2025.06]  Sekai: A Video Dataset towards World Exploration[[PDF](https://arxiv.org/pdf/2506.15675),[Page](https://lixsp11.github.io/sekai-project/)] \n\n\n* **Phantom-Data**\n  [arxiv 2025.06]  Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset[[PDF](https://arxiv.org/pdf/2506.18851),[Page](https://phantom-video.github.io/Phantom-Data/)]  ![Code](https://img.shields.io/github/stars/Phantom-video/Phantom-Data?style=social&label=Star)\n\n* **CI-VID interleaved Text-Video Dataset**\n [arxiv 2025.07] CI-VID: A Coherent Interleaved Text-Video Dataset  [[PDF](https://arxiv.org/abs/2507.01938),[Page](https://github.com/ymju-BAAI/CI-VID)] ![Code](https://img.shields.io/github/stars/ymju-BAAI/CI-VID?style=social&label=Star)\n\n\n* **SpeakerVid-5M**\n[arxiv 2025.07] SpeakerVid-5M: A Large-Scale High-Quality Dataset for audio-visual Dyadic Interactive Human Generation [[PDF](https://arxiv.org/pdf/2507.09862),[Page](https://dorniwang.github.io/SpeakerVid-5M/)]\n\n\n* **SpatialVID**\n[arxiv 2025.09] SpatialVID: A Large-Scale Video Dataset with Spatial Annotations [[PDF](https://arxiv.org/abs/2509.09676),[Page](https://nju-3dv.github.io/projects/SpatialVID/)]  ![Code](https://img.shields.io/github/stars/NJU-3DV/spatialVID?style=social&label=Star)\n\n* **TalkCuts**\n[arxiv 2025.10] TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation  [[PDF](https://arxiv.org/abs/2510.07249),[Page](https://talkcuts.github.io/)] \n\n* **Ditto-1M for Editing**\n[arxiv 2025.10] Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset  [[PDF](https://arxiv.org/abs/2510.15742),[Page](https://editto.net/)] ![Code](https://img.shields.io/github/stars/EzioBy/Ditto?style=social&label=Star)\n\n\n* **Action100M**\n[arxiv 2026.01] Action100M: A Large-scale Video Action Dataset  
[[PDF](https://arxiv.org/pdf/2601.10592),[Page](https://github.com/facebookresearch/Action100M)] ![Code](https://img.shields.io/github/stars/facebookresearch/Action100M?style=social&label=Star)\n\n\n* **Ego-1K**\n[arxiv 2026.03] Ego-1K -- A Large-Scale Multiview Video Dataset for Egocentric Vision  [[PDF](https://arxiv.org/abs/2603.13741),[Page](https://huggingface.co/datasets/facebook/ego-1k)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## VAE\n[arxiv 2024.05]CV-VAE: A Compatible Video VAE for Latent Generative Video Models [[PDF](https://arxiv.org/abs/2405.20279),[Page](https://ailab-cvc.github.io/cvvae/index.html)] ![Code](https://img.shields.io/github/stars/AILab-CVC/CV-VAE?style=social&label=Star)\n\n[arxiv 2024.06]OmniTokenizer: A Joint Image-Video Tokenizer for Visual Generation[[PDF](https://arxiv.org/abs/2406.09399),[Page](https://github.com/FoundationVision/OmniTokenizer)] ![Code](https://img.shields.io/github/stars/FoundationVision/OmniTokenizer?style=social&label=Star)\n\n[arxiv 2024.09] OD-VAE: An Omni-dimensional Video Compressor for Improving Latent Video Diffusion Model [[PDF](https://arxiv.org/abs/2409.01199),[Page](https://github.com/PKU-YuanGroup/Open-Sora-Plan)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/Open-Sora-Plan?style=social&label=Star)\n\n[arxiv 2024.10] MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion [[PDF](https://arxiv.org/abs/2410.07659),[Page](https://researchgroup12.github.io/Abstract_Diagram.html)]\n\n[arxiv 2024.10] Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models [[PDF](https://arxiv.org/abs/2410.10733),[Page](https://github.com/mit-han-lab/efficientvit)] ![Code](https://img.shields.io/github/stars/mit-han-lab/efficientvit?style=social&label=Star)\n\n[arxiv 2024.11] Cosmos Tokenizer: A suite of image and video neural tokenizers. [[PDF](),[Page](https://github.com/NVIDIA/Cosmos-Tokenizer)] ![Code](https://img.shields.io/github/stars/NVIDIA/Cosmos-Tokenizer?style=social&label=Star)\n\n[arxiv 2024.11] Improved Video VAE for Latent Video Diffusion Model [[PDF](https://arxiv.org/abs/2411.06449),[Page](https://wpy1999.github.io/IV-VAE/)] \n\n[arxiv 2024.11] REDUCIO! 
Generating 1024×1024 Video within 16 Seconds using Extremely Compressed Motion Latents [[PDF](https://arxiv.org/abs/2411.13552),[Page](https://github.com/microsoft/Reducio-VAE)] [![Code](https://img.shields.io/github/stars/microsoft/Reducio-VAE?style=social&label=Star)](https://github.com/microsoft/Reducio-VAE)\n\n[arxiv 2024.11] Factorized Visual Tokenization and Generation [[PDF](https://arxiv.org/abs/2411.16681),[Page](https://showlab.github.io/FQGAN/)] ![Code](https://img.shields.io/github/stars/showlab/FQGAN?style=social&label=Star)\n\n[arxiv 2024.11] WF-VAE: Enhancing Video VAE by Wavelet-Driven Energy Flow for Latent Video Diffusion Model [[PDF](https://arxiv.org/abs/2411.17459),[Page](https://github.com/PKU-YuanGroup/WF-VAE)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/WF-VAE?style=social&label=Star)\n\n[arxiv 2024.12] Four-Plane Factorized Video Autoencoders [[PDF](https://arxiv.org/abs/2412.04452),[Page](https://arxiv.org/abs/2412.04452)] \n\n[arxiv 2024.12] VidTok: A Versatile and Open-Source Video Tokenizer [[PDF](https://arxiv.org/abs/2412.13061),[Page](https://github.com/microsoft/VidTok)] ![Code](https://img.shields.io/github/stars/microsoft/VidTok?style=social&label=Star)\n\n[arxiv 2024.12] Scaling 4D Representations  [[PDF](https://arxiv.org/abs/2412.15212)]\n\n[arxiv 2024.12] Large Motion Video Autoencoding with Cross-modal Video VAE [[PDF](https://arxiv.org/abs/2412.17805),[Page](https://yzxing87.github.io/vae/)] ![Code](https://img.shields.io/github/stars/VideoVAEPlus?style=social&label=Star)\n\n[arxiv 2024.12] VidTwin: Video VAE with Decoupled Structure and Dynamics [[PDF](https://arxiv.org/abs/2412.17726),[Page](https://github.com/microsoft/VidTok)] ![Code](https://img.shields.io/github/stars/microsoft/VidTok?style=social&label=Star)\n\n[arxiv 2025.01] Learnings from Scaling Visual Tokenizers for Reconstruction and Generation [[PDF](https://arxiv.org/abs/2501.09755),[Page](https://vitok.github.io/)] \n\n[arxiv 2025.02]  DLFR-VAE: Dynamic Latent Frame Rate VAE for Video Generation [[PDF](http://arxiv.org/abs/2502.11897),[Page](https://github.com/thu-nics/DLFR-VAE)] ![Code](https://img.shields.io/github/stars/thu-nics/DLFR-VAE?style=social&label=Star)\n\n[arxiv 2025.03]  Alias-Free Latent Diffusion Models: Improving Fractional Shift Equivariance of Diffusion Latent Space [[PDF](https://arxiv.org/pdf/2503.09419),[Page](https://github.com/SingleZombie/AFLDM)] ![Code](https://img.shields.io/github/stars/SingleZombie/AFLDM?style=social&label=Star)\n\n[arxiv 2025.03] HiTVideo: Hierarchical Tokenizers for Enhancing Text-to-Video Generation with Autoregressive Large Language Models  [[PDF](https://arxiv.org/pdf/2503.11513),[Page](https://ziqinzhou66.github.io/project/HiTVideo)] \n\n[arxiv 2025.04]  VGDFR: Diffusion-based Video Generation with Dynamic Latent Frame Rate [[PDF](https://arxiv.org/pdf/2504.12259),[Page](https://github.com/thu-nics/VGDFR)] ![Code](https://img.shields.io/github/stars/thu-nics/VGDFR?style=social&label=Star)\n\n[arxiv 2025.04] D2iT: Dynamic Diffusion Transformer for Accurate Image Generation  [[PDF](https://arxiv.org/pdf/2504.09454),[Page](https://github.com/jiawn-creator/Dynamic-DiT)] ![Code](https://img.shields.io/github/stars/jiawn-creator/Dynamic-DiT?style=social&label=Star)\n\n[arxiv 2025.06]  Hi-VAE: Efficient Video Autoencoding with Global and Detailed Motion [[PDF](https://arxiv.org/pdf/2506.07136)]\n\n[arxiv 2025.08]  OneVAE: Joint Discrete and Continuous Optimization Helps Discrete Video VAE Train Better 
[arxiv 2025.09]  AToken: A Unified Tokenizer for Vision [[PDF](https://arxiv.org/abs/2509.14476),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)

[arxiv 2026.01] Adaptive 1D Video Diffusion Autoencoder  [[PDF](https://arxiv.org/pdf/2602.04220)]

[arxiv 2026.01]  VTok: A Unified Video Tokenizer with Decoupled Spatial-Temporal Latents [[PDF](https://arxiv.org/abs/2602.04202),[Page](https://wangf3014.github.io/VTok_page/)] ![Code](https://img.shields.io/github/stars/wangf3014/VTok?style=social&label=Star)

[arxiv 2026.02] Flash-VAED: Plug-and-Play VAE Decoders for Efficient Video Generation  [[PDF](https://arxiv.org/abs/2602.19161),[Page](https://github.com/Aoko955/Flash-VAED)] ![Code](https://img.shields.io/github/stars/Aoko955/Flash-VAED?style=social&label=Star)

[arxiv 2026.03] RAC: Rectified Flow Auto Coder  [[PDF](https://arxiv.org/abs/2603.05925),[Page](https://world-snapshot.github.io/RAC/)] ![Code](https://img.shields.io/github/stars/World-Snapshot/RAC?style=social&label=Star)

[arxiv 2026.03] EVATok: Adaptive Length Video Tokenization for Efficient Visual Autoregressive Generation  [[PDF](https://arxiv.org/abs/2603.12267),[Page](https://silentview.github.io/EVATok/)]

[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Tokenizer 

[arxiv 2024.12] Divot: Diffusion Powers Video Tokenizer for Comprehension and Generation [[PDF](https://arxiv.org/abs/2412.04432),[Page](https://github.com/TencentARC/Divot)] ![Code](https://img.shields.io/github/stars/TencentARC/Divot?style=social&label=Star)

[arxiv 2024.12] TokenFlow: Unified Image Tokenizer for Multimodal Understanding and Generation  [[PDF](https://arxiv.org/abs/2412.03069),[Page](https://byteflow-ai.github.io/TokenFlow/)] ![Code](https://img.shields.io/github/stars/ByteFlow-AI/TokenFlow?style=social&label=Star)

[arxiv 2024.12] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding [[PDF](https://arxiv.org/abs/2412.09616),[Page](https://github.com/OpenGVLab/PVC)] ![Code](https://img.shields.io/github/stars/OpenGVLab/PVC?style=social&label=Star)

[arxiv 2024.12] Spectral Image Tokenizer [[PDF](https://arxiv.org/abs/2412.09607)]

[arxiv 2024.12] Dynamic-VLM: Simple Dynamic Visual Token Compression for VideoLLM [[PDF](https://arxiv.org/abs/2412.09530)]

[arxiv 2025.01] LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token [[PDF](https://arxiv.org/abs/2501.03895),[Page](https://huggingface.co/ICTNLP/llava-mini-llama-3.1-8b)] ![Code](https://img.shields.io/github/stars/LLaVA-Mini?style=social&label=Star)

[arxiv 2025.02] FlexTok: Resampling Images into 1D Token Sequences of Flexible Length  [[PDF](https://arxiv.org/abs/2502.13967),[Page](https://flextok.epfl.ch/)]

[arxiv 2025.05] Learning Adaptive and Temporally Causal Video Tokenization in a 1D Latent Space  [[PDF](https://arxiv.org/abs/2505.17011),[Page](https://github.com/VisionXLab/AdapTok)] ![Code](https://img.shields.io/github/stars/VisionXLab/AdapTok?style=social&label=Star)

[arxiv 2025.05] VFRTok: Variable Frame Rates Video Tokenizer with Duration-Proportional Information Assumption  [[PDF](https://arxiv.org/abs/2505.12053)]

[arxiv 2025.07] REFTOK: Reference-Based Tokenization for 
Video Generation  [[PDF](https://arxiv.org/pdf/2507.02862)]

[arxiv 2025.07] MambaVideo for Discrete Video Tokenization with Channel-Split Quantization  [[PDF](https://arxiv.org/abs/2507.04559),[Page](https://research.nvidia.com/labs/dir/mamba-tokenizer/)]

[arxiv 2025.12] Towards Scalable Pre-training of Visual Tokenizers for Generation  [[PDF](https://arxiv.org/abs/2512.13687),[Page](https://github.com/MiniMax-AI/VTP)] ![Code](https://img.shields.io/github/stars/MiniMax-AI/VTP?style=social&label=Star)


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## GAN/VAE-based methods 
[NIPS 2016] **---VGAN---** Generating Videos with Scene Dynamics \[[PDF](https://proceedings.neurips.cc/paper/2016/file/04025959b191f8f9de3f924f0940515f-Paper.pdf), [code](https://github.com/cvondrick/videogan) \]

[ICCV 2017] **---TGAN---** Temporal Generative Adversarial Nets with Singular Value Clipping \[[PDF](https://arxiv.org/pdf/1611.06624.pdf), [code](https://github.com/pfnet-research/tgan) \]

[CVPR 2018] **---MoCoGAN---** MoCoGAN: Decomposing Motion and Content for Video Generation \[[PDF](https://arxiv.org/pdf/1707.04993.pdf), [code](https://github.com/sergeytulyakov/mocogan) \]

[NIPS 2018] **---SVG---** Stochastic Video Generation with a Learned Prior \[[PDF](https://proceedings.mlr.press/v80/denton18a/denton18a.pdf), [code](https://github.com/edenton/svg) \]

[ECCV 2018] Probabilistic Video Generation using Holistic Attribute Control \[[PDF](https://openaccess.thecvf.com/content_ECCV_2018/papers/Jiawei_He_Probabilistic_Video_Generation_ECCV_2018_paper.pdf), code\]

[CVPR 2019; CVL ETH] **---SWGAN---** Sliced Wasserstein Generative Models \[[PDF](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wu_Sliced_Wasserstein_Generative_Models_CVPR_2019_paper.pdf), [code](https://github.com/skolouri/swae) \]

[NIPS 2019; NVLabs] **---vid2vid---** Few-shot Video-to-Video Synthesis \[[PDF](https://nvlabs.github.io/few-shot-vid2vid/main.pdf), [code](https://github.com/NVlabs/few-shot-vid2vid) \]

[arxiv 2019; Deepmind] **---DVD-GAN---** ADVERSARIAL VIDEO GENERATION ON COMPLEX DATASETS \[[PDF](https://arxiv.org/pdf/1907.06571.pdf), [code](https://github.com/Harrypotterrrr/DVD-GAN) \]

[IJCV 2020] **---TGANv2---** Train Sparsely, Generate Densely: Memory-efficient Unsupervised Training of High-resolution Temporal GAN \[[PDF](https://arxiv.org/pdf/1811.09245.pdf), [code](https://github.com/pfnet-research/tgan2) \]

[PMLR 2021] **---TGANv2-ODE---** Latent Neural Differential Equations for Video Generation \[[PDF](https://arxiv.org/pdf/2011.03864.pdf), [code](https://github.com/Zasder3/Latent-Neural-Differential-Equations-for-Video-Generation) \]

[ICLR 2021] **---DVG---** Diverse Video Generation using a Gaussian Process Trigger \[[PDF](https://openreview.net/pdf?id=Qm7R_SdqTpT), [code](https://github.com/shgaurav1/DVG) \]

[Arxiv 2021; MRSA] **---GODIVA---** GODIVA: Generating Open-DomaIn Videos from nAtural Descriptions \[[PDF](https://arxiv.org/pdf/2104.14806.pdf), code \]

*[CVPR 2022] **---StyleGAN-V---** StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2 \[[PDF](https://arxiv.org/pdf/2112.14683.pdf), [code](https://github.com/universome/stylegan-v) \]

*[NeurIPS 2022] **---MCVD---** MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation [[PDF](https://arxiv.org/abs/2205.09853), [code](https://github.com/voletiv/mcvd-pytorch)]
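
Several of the GANs above (MoCoGAN most explicitly) factor the latent into a per-clip content code and a per-frame motion trajectory drawn from a recurrent prior. Below is a minimal sketch of that sampling scheme; all sizes and module choices are assumptions for illustration only, not any paper's architecture.

```python
# Illustrative content/motion decomposition in the spirit of MoCoGAN:
# one latent fixed per clip, one per-frame latent from a recurrent prior.
# All sizes and module choices are assumptions for illustration only.
import torch
import torch.nn as nn

class DecomposedVideoGenerator(nn.Module):
    def __init__(self, content_dim=64, motion_dim=16, frame_pixels=32 * 32):
        super().__init__()
        self.content_dim, self.motion_dim = content_dim, motion_dim
        self.motion_prior = nn.GRU(motion_dim, motion_dim, batch_first=True)
        self.frame_decoder = nn.Sequential(
            nn.Linear(content_dim + motion_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_pixels), nn.Tanh(),
        )

    def forward(self, batch, num_frames):
        z_content = torch.randn(batch, self.content_dim)        # shared across the clip
        eps = torch.randn(batch, num_frames, self.motion_dim)   # fresh noise per frame
        z_motion, _ = self.motion_prior(eps)                    # temporally correlated path
        z_content = z_content.unsqueeze(1).expand(-1, num_frames, -1)
        return self.frame_decoder(torch.cat([z_content, z_motion], dim=-1))  # (B, T, H*W)

frames = DecomposedVideoGenerator()(batch=2, num_frames=8)
print(frames.shape)  # torch.Size([2, 8, 1024])
```
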

## :point_right: Implicit Neural Representations
[ICLR 2022] Generating videos with dynamics-aware implicit generative adversarial networks \[[PDF](https://openreview.net/pdf?id=Czsdv-S4-w9), [code]() \]

## Transformer-based 
[arxiv 2021] **---VideoGPT---** VideoGPT: Video Generation using VQ-VAE and Transformers \[[PDF](https://arxiv.org/pdf/2104.10157.pdf), [code](https://github.com/wilson1yan/VideoGPT) \]

[ECCV 2022; Microsoft] **---NÜWA---** NÜWA: Visual Synthesis Pre-training for Neural visUal World creAtion \[[PDF](https://arxiv.org/pdf/2111.12417.pdf), code \]

[NIPS 2022; Microsoft] **---NÜWA-Infinity---** NUWA-Infinity: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis \[[PDF](https://arxiv.org/pdf/2207.09814.pdf), code \]

[Arxiv 2022; Tsinghua] **---CogVideo---** CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers \[[PDF](https://arxiv.org/pdf/2205.15868.pdf), [code](https://github.com/THUDM/CogVideo) \]

*[ECCV 2022] **---TATS---** Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer \[[PDF](https://arxiv.org/pdf/2204.03638.pdf), [code](https://github.com/SongweiGe/TATS)\]

*[arxiv 2022; Google] **---PHENAKI---** PHENAKI: VARIABLE LENGTH VIDEO GENERATION FROM OPEN DOMAIN TEXTUAL DESCRIPTIONS \[[PDF](https://arxiv.org/pdf/2210.02399.pdf), code \]

[arxiv 2022.12]MAGVIT: Masked Generative Video Transformer[[PDF](https://arxiv.org/pdf/2212.05199.pdf)]

[arxiv 2023.11]Optimal Noise pursuit for Augmenting Text-to-Video Generation [[PDF](https://arxiv.org/abs/2311.00949)]

[arxiv 2024.01]WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens [[PDF](https://arxiv.org/abs/2401.09985),[Page](https://world-dreamer.github.io/)]

[arxiv 2024.10] Loong: Generating Minute-level Long Videos with Autoregressive Language Models [[PDF](https://arxiv.org/abs/2410.02757), [Page](https://epiphqny.github.io/Loong-video/)]

[arxiv 2024.10] LARP: Tokenizing Videos with a Learned Autoregressive Generative Prior  [[PDF](https://arxiv.org/abs/2410.21264),[Page](https://hywang66.github.io/larp/)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## Diffusion-based methods 
*[NIPS 2022; Google] **---VDM---**  Video Diffusion Models \[[PDF](https://arxiv.org/pdf/2204.03458.pdf), [code](https://github.com/lucidrains/video-diffusion-pytorch) \]

*[arxiv 2022; Meta] **---MAKE-A-VIDEO---** MAKE-A-VIDEO: TEXT-TO-VIDEO GENERATION WITHOUT TEXT-VIDEO DATA \[[PDF](https://arxiv.org/pdf/2209.14792.pdf), code \]

*[arxiv 2022; Google] **---IMAGEN VIDEO---** IMAGEN VIDEO: HIGH DEFINITION VIDEO GENERATION WITH DIFFUSION MODELS \[[PDF](https://arxiv.org/pdf/2210.02303.pdf), code \]

*[arxiv 2022; ByteDance] ***MAGIC VIDEO***: Efficient Video Generation With Latent Diffusion Models \[[PDF](https://arxiv.org/pdf/2211.11018.pdf), code\]

*[arxiv 2022; Tencent] ***LVDM*** Latent Video Diffusion Models for High-Fidelity Video Generation with Arbitrary Lengths  \[[PDF](https://arxiv.org/pdf/2211.13221.pdf), code\]

[AAAI 2022; JHU] VIDM: Video Implicit Diffusion Model \[[PDF](https://kfmei.page/vidm/Video_implicit_diffusion_models.pdf)\]

[arxiv 2023.01; Meta] Text-To-4D Dynamic Scene Generation [[PDF](https://arxiv.org/pdf/2301.11280.pdf), [Page](https://make-a-video3d.github.io/)]
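
The latent diffusion models that dominate the rest of this section share one sampling skeleton: start a (B, C, T, H, W) latent from Gaussian noise, repeatedly subtract predicted noise under a schedule, and decode with a video VAE. A minimal DDPM-style sketch follows; the linear schedule and the zero noise predictor are toy stand-ins, not any listed paper's configuration.

```python
# Minimal DDPM-style sampling skeleton for a video latent of shape
# (B, C, T, H, W). The linear schedule and the zero "noise predictor"
# are toy stand-ins, not any listed paper's configuration.
import torch

def sample_latent_video(eps_model, shape=(1, 4, 16, 32, 32), steps=50):
    x = torch.randn(shape)                          # start from pure Gaussian noise
    alphas = torch.linspace(0.999, 0.98, steps)     # toy noise schedule
    alpha_bar = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        eps = eps_model(x, t)                       # predict the noise component
        # Posterior mean update (added noise omitted for brevity, i.e. sigma = 0)
        x = (x - (1 - alphas[t]) / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
    return x                                        # decode with a video VAE afterwards

toy_eps_model = lambda x, t: torch.zeros_like(x)    # stand-in for a 3D U-Net or video DiT
print(sample_latent_video(toy_eps_model).shape)     # torch.Size([1, 4, 16, 32, 32])
```

The papers below differ mainly in what plays the role of `eps_model` (3D U-Nets, factorized spatiotemporal attention, DiT backbones) and in how text conditioning and the noise schedule are designed.
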
[arxiv 2023.03]Video Probabilistic Diffusion Models in Projected Latent Space [[PDF](https://arxiv.org/abs/2302.07685), [Page](https://sihyun.me/PVDM/)]

[arxiv 2023.03]Controllable Video Generation by Learning the Underlying Dynamical System with Neural ODE [[PDF](https://arxiv.org/abs/2303.05323)]

[arxiv 2023.03]Decomposed Diffusion Models for High-Quality Video Generation [[PDF](https://arxiv.org/pdf/2303.08320.pdf)]

[arxiv 2023.03]NUWA-XL: Diffusion over Diffusion for eXtremely Long Video Generation [[PDF](https://arxiv.org/abs/2303.12346)]

*[arxiv 2023.04]Latent-Shift: Latent Diffusion with Temporal Shift for Efficient Text-to-Video Generation [[PDF](https://arxiv.org/abs/2304.08477)]

*[arxiv 2023.04]Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models [[PDF](https://arxiv.org/abs/2304.08818), [Page](https://research.nvidia.com/labs/toronto-ai/VideoLDM/)]

[arxiv 2023.04]LaMD: Latent Motion Diffusion for Video Generation [[PDF](https://arxiv.org/abs/2304.11603)]

*[arxiv 2023.05]Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models[[PDF](https://arxiv.org/pdf/2305.10474.pdf), [Page](https://research.nvidia.com/labs/dir/pyoco/)]

[arxiv 2023.05]VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation [[PDF](https://arxiv.org/pdf/2305.10874.pdf)]

[arxiv 2023.08]ModelScope Text-to-Video Technical Report [[PDF](https://arxiv.org/pdf/2308.06571.pdf)]

[arxiv 2023.08]Dual-Stream Diffusion Net for Text-to-Video Generation [[PDF](https://huggingface.co/papers/2308.08316)]

[arxiv 2023.08]SimDA: Simple Diffusion Adapter for Efficient Video Generation [[PDF](https://arxiv.org/abs/2308.09710), [Page](https://chenhsing.github.io/SimDA/)]

[arxiv 2023.08]Dysen-VDM: Empowering Dynamics-aware Text-to-Video Diffusion with Large Language Models [[PDF](https://arxiv.org/pdf/2308.13812.pdf), [Page](https://haofei.vip/Dysen-VDM/)]

[arxiv 2023.09]Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation[[PDF](https://arxiv.org/pdf/2309.03549.pdf),[Page](https://anonymous0x233.github.io/ReuseAndDiffuse/)]

[arxiv 2023.09]LAVIE: HIGH-QUALITY VIDEO GENERATION WITH CASCADED LATENT DIFFUSION MODELS [[PDF](https://arxiv.org/pdf/2309.15103.pdf), [Page](https://vchitect.github.io/LaVie-project/)]

[arxiv 2023.09]VideoDirectorGPT: Consistent Multi-scene Video Generation via LLM-Guided Planning [[PDF](https://arxiv.org/abs/2309.15091), [Page](https://videodirectorgpt.github.io/)]

[arxiv 2023.10]Show-1: Marrying Pixel and Latent Diffusion Models for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2309.15818), [Page](https://showlab.github.io/Show-1)]

[arxiv 2023.10]LLM-grounded Video Diffusion Models [[PDF](https://arxiv.org/abs/2309.17444),[Page](https://llm-grounded-video-diffusion.github.io/)]

[arxiv 2023.10]VideoCrafter1: Open Diffusion Models for High-Quality Video Generation [[PDF](https://arxiv.org/abs/2310.19512),[Page](https://github.com/AILab-CVC/VideoCrafter)]

[arxiv 2023.11]Make Pixels Dance: High-Dynamic Video Generation [[PDF](https://arxiv.org/abs/2311.10982), [Page](https://makepixelsdance.github.io/)]

[arxiv 2023.11]Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets[[PDF](https://static1.squarespace.com/static/6213c340453c3f502425776e/t/655ce779b9d47d342a93c890/1700587395994/stable_video_diffusion.pdf), [Page](https://t.co/P2lmq343cf)]

[arxiv 2023.11]Kandinsky Video 
[[PDF](https://arxiv.org/abs/2311.13073),[Page](https://ai-forever.github.io/kandinsky-video/)]\n\n[arxiv 2023.12]GenDeF: Learning Generative Deformation Field for Video Generation [[PDF](https://arxiv.org/abs/2312.04561),[Page](https://aim-uofa.github.io/GenDeF/)]\n\n[arxiv 2023.12]GenTron: Delving Deep into Diffusion Transformers for Image and Video Generation [[PDF](https://arxiv.org/abs/2312.04557),[Page](https://www.shoufachen.com/gentron_website/)]\n\n[arxiv 2023.12]Hierarchical Spatio-temporal Decoupling for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2312.04483), [Page](https://higen-t2v.github.io/)]\n\n[arxiv 2023.12]AnimateZero:Video Diffusion Models are Zero-Shot Image Animators [[PDF](https://arxiv.org/abs/2312.03793),[Page](https://vvictoryuki.github.io/animatezero.github.io/)]\n\n[arxiv 2023.12]Photorealistic Video Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2312.06662),[Page](https://walt-video-diffusion.github.io/)]\n\n[arxiv 2023.12]A Recipe for Scaling up Text-to-Video Generation with Text-free Videos [[PDF](https://arxiv.org/abs/2312.15770),[Page](https://tf-t2v.github.io/)]\n\n[arxiv 2023.12]MagicVideo-V2: Multi-Stage High-Aesthetic Video Generation [[PDF](https://arxiv.org/pdf/2401.04468.pdf), [Page](https://magicvideov2.github.io/)]\n\n[arxiv 2024.1]Latte: Latent Diffusion Transformer for Video Generation [[PDF](https://arxiv.org/abs/2401.03048),[Page](https://maxin-cn.github.io/latte_project)]\n\n[arxiv 2024.1]VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models [[PDF](https://arxiv.org/abs/2401.09047),[Page](https://ailab-cvc.github.io/videocrafter2/)]\n\n[arxiv 2024.1]Lumiere: A Space-Time Diffusion Model for Video Generation [[PDF](https://arxiv.org/abs/2401.12945), [Page](https://lumiere-video.github.io/)]\n\n[arxiv 2024.02]Hybrid Video Diffusion Models with 2D Triplane and 3D Wavelet Representation [[PDF](https://arxiv.org/abs/2402.13729)]\n\n[arxiv 2024.02]Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis[[PDF](https://arxiv.org/abs/2402.14797),[Page](https://snap-research.github.io/snapvideo/)]\n\n[arxiv 2024.03]Mora: Enabling Generalist Video Generation via A Multi-Agent Framework[[PDF](https://arxiv.org/html/2403.13248v1)]\n\n[arxiv 2024.03]Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition [[PDF](https://arxiv.org/abs/2403.14148),[Page](https://arxiv.org/abs/2403.14148)]\n\n[arxiv 2024.04]Grid Diffusion Models for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2404.00234)]\n\n[arxiv 2024.04]MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators [[PDF](https://arxiv.org/abs/2404.05014)]\n\n[arxiv 2024.05]Matten: Video Generation with Mamba-Attention [[PDF](https://arxiv.org/abs/2405.03025)]\n\n[arxiv 2024.05]Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models [[PDF](https://arxiv.org/abs/2405.04233),[Page](https://www.shengshu-ai.com/vidu)]\n\n[arxiv 2024.05]Lumina-T2X: Transforming Text into Any Modality, Resolution, and Duration via Flow-based Large Diffusion Transformers [[PDF](https://arxiv.org/abs/2405.05945),[Page](https://github.com/Alpha-VLLM/Lumina-T2X)]\n\n[arxiv 2024.05] Scaling Diffusion Mamba with Bidirectional SSMs for Efficient Image and Video Generation [[PDF](https://arxiv.org/abs/2405.15881)]\n\n[arxiv 2024.06]Hierarchical Patch Diffusion Models for High-Resolution Video Generation [[PDF](https://arxiv.org/pdf/2406.07792), 
[Page](https://snap-research.github.io/hpdm)]\n\n[arxiv 2024.08] xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations[[PDF](https://arxiv.org/abs/2408.12590), [Page](https://github.com/SalesforceAIResearch/xgen-videosyn)]\n\n[arxiv 2024.10] Movie Gen: A Cast of Media Foundation Models [[PDF](https://ai.meta.com/static-resource/movie-gen-research-paper), [Page]()]\n\n[arxiv 2024.10] MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion [[PDF](https://arxiv.org/abs/2410.07659),[Page](https://researchgroup12.github.io/Abstract_Diagram.html)]\n\n[arxiv 2024.10] Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach [[PDF](https://arxiv.org/abs/2410.03160), [Page](https://github.com/Yaofang-Liu/FVDM)]\n\n[arxiv 2024.10] MarDini: Masked Autoregressive Diffusion for Video Generation at Scale [[PDF](https://arxiv.org/abs/2410.20280), [Page](https://mardini-vidgen.github.io/)]\n\n[arxiv 2024.12] Open-Sora Plan: Open-Source Large Video Generation Model [[PDF](https://arxiv.org/abs/2412.00131),[Page](https://github.com/PKU-YuanGroup/Open-Sora-Plan)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/Open-Sora-Plan?style=social&label=Star)\n\n[arxiv 2024.12] HunyuanVideo: A Systematic Framework For Large Video Generation Model [[PDF](https://arxiv.org/abs/2412.03603),[Page](https://github.com/Tencent/HunyuanVideo)] ![Code](https://img.shields.io/github/stars/Tencent/HunyuanVideo?style=social&label=Star)\n\n[arxiv 2025.01] Open-Sora: Democratizing Efficient Video Production for All  [[PDF](https://arxiv.org/pdf/2412.20404),[Page](https://github.com/hpcaitech/Open-Sora)] ![Code](https://img.shields.io/github/stars/hpcaitech/Open-Sora?style=social&label=Star)\n\n[arxiv 2025.02]  FlashVideo:Flowing Fidelity to Detail for Efficient High-Resolution Video Generation [[PDF](https://arxiv.org/abs/2502.05179),[Page](https://github.com/FoundationVision/FlashVideo)] ![Code](https://img.shields.io/github/stars/FoundationVision/FlashVideo?style=social&label=Star)\n\n[arxiv 2025.02] Goku: Flow Based Video Generative Foundation Models  [[PDF](https://arxiv.org/abs/2502.04896),[Page](https://saiyan-world.github.io/goku/)] ![Code](https://img.shields.io/github/stars/Saiyan-World/goku?style=social&label=Star)\n\n[arxiv 2025.02] Lumina-Video: Efficient and Flexible Video Generation with Multi-scale Next-DiT  [[PDF](https://arxiv.org/abs/2502.06782),[Page](https://github.com/Alpha-VLLM/Lumina-Video)] ![Code](https://img.shields.io/github/stars/Alpha-VLLM/Lumina-Video?style=social&label=Star)\n\n[arxiv 2025.02] Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model  [[PDF](https://arxiv.org/abs/2502.10248),[Page](https://yuewen.cn/videos)] ![Code](https://img.shields.io/github/stars/stepfun-ai/Step-Video-T2V?style=social&label=Star)\n\n[arxiv 2025.02] SkyReels V1: Human-Centric Video Foundation Model  [[Page](https://github.com/SkyworkAI/SkyReels-V1)] ![Code](https://img.shields.io/github/stars/SkyworkAI/SkyReels-V1?style=social&label=Star)\n\n[arxiv 2025.03] TPDiff: Temporal Pyramid Video Diffusion Model  [[PDF](https://arxiv.org/abs/2503.09566),[Page](https://showlab.github.io/TPDiff/)] ![Code](https://img.shields.io/github/stars/showlab/TPDiff?style=social&label=Star)\n\n[arxiv 2025.03]  Step-Video-TI2V Technical Report: A State-of-the-Art Text-Driven Image-to-Video Generation Model [[PDF](https://arxiv.org/abs/2503.11251),[Page](https://github.com/stepfun-ai/Step-Video-TI2V)] 
![Code](https://img.shields.io/github/stars/stepfun-ai/Step-Video-TI2V?style=social&label=Star)\n\n[arxiv 2025.03]  Wan: Open and Advanced Large-Scale Video Generative Models [[PDF](https://arxiv.org/abs/2503.20314),[Page](https://github.com/Wan-Video/Wan2.1)] ![Code](https://img.shields.io/github/stars/Wan-Video/Wan2.1?style=social&label=Star)\n\n[arxiv 2025.04]  Seaweed-7B: Cost-Effective Training of Video Generation Foundation Model [[PDF](https://arxiv.org/abs/2504.08685),[Page](https://seaweed.video/)] \n\n[arxiv 2025.04] Turbo2K: Towards Ultra-Efficient and High-Quality 2K Video Synthesis  [[PDF](https://arxiv.org/abs/2504.14470),[Page](https://jingjingrenabc.github.io/turbo2k/)] \n\n[arxiv 2025.05] MAGI-1: Autoregressive Video Generation at Scale  [[PDF](https://arxiv.org/abs/2505.13211),[Page](https://github.com/SandAI-org/Magi-1)] ![Code](https://img.shields.io/github/stars/SandAI-org/Magi-1?style=social&label=Star)\n\n[arxiv 2025.06] Seedance 1.0: Exploring the Boundaries of Video Generation Models  [[PDF](https://arxiv.org/abs/2506.09113),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.08] Waver: Wave Your Way to Lifelike Video Generation  [[PDF](https://arxiv.org/abs/2508.15761),[Page](https://github.com/FoundationVision/Waver)] \n\n[arxiv 2025.10] LongCat-Video Technical Report  [[PDF](https://arxiv.org/abs/2510.22200),[Page](https://github.com/meituan-longcat/LongCat-Video)] ![Code](https://img.shields.io/github/stars/meituan-longcat/LongCat-Video?style=social&label=Star)\n\n[arxiv 2025.11]  Kandinsky 5.0: A Family of Foundation Models for Image and Video Generation [[PDF](https://arxiv.org/abs/2511.14993),[Page](https://kandinskylab.ai/)] ![Code](https://img.shields.io/github/stars/kandinskylab/kandinsky-5?style=social&label=Star)\n\n[arxiv 2025.11]  HunyuanVideo 1.5 Technical Report [[PDF](https://arxiv.org/abs/2511.18870),[Page](https://github.com/Tencent-Hunyuan/HunyuanVideo-1.5)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HunyuanVideo-1.5?style=social&label=Star)\n\n[arxiv 2025.12] Seedance 1.5 pro: A Native Audio-Visual Joint Generation Foundation Model  [[PDF](https://arxiv.org/abs/2512.13507),[Page](https://seed.bytedance.com/zh/seedance1_5_pro)] \n\n[arxiv 2026.02] TeleBoost: A Systematic Alignment Framework for High-Fidelity, Controllable, and Robust Video Generation  [[PDF](https://arxiv.org/abs/2602.07595)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## autoregressive\n[arxiv 2025.11] InfinityStar: Unified Spacetime AutoRegressive Modeling for Visual Generation  [[PDF](https://arxiv.org/abs/2511.04675),[Page](https://github.com/FoundationVision/InfinityStar)] ![Code](https://img.shields.io/github/stars/FoundationVision/InfinityStar?style=social&label=Star)\n\n[arxiv 2025.11] Adaptive Begin-of-Video Tokens for Autoregressive Video Diffusion Models  [[PDF](https://arxiv.org/abs/2511.12099)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## LLMs-based \n[arxiv 2023.12]VideoPoet: A Large Language Model for Zero-Shot Video Generation [[PDF](https://arxiv.org/abs/2312.14125),[Page](http://sites.research.google/videopoet/)]\n\n[arxiv 2024.02] Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization [[PDF](https://arxiv.org/abs/2402.03161),[Page](https://video-lavit.github.io/)] 
![Code](https://img.shields.io/github/stars/jy0205/LaVIT?style=social&label=Star)\n\n[arxiv 2025.07] Omni-Video: Democratizing Unified Video Understanding and Generation  [[PDF](https://arxiv.org/pdf/2507.06119),[Page](https://sais-fuxi.github.io/Omni-Video/)] \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## DiT\n[arxiv 2024.05]  EasyAnimate: A High-Performance Long Video Generation Method based on Transformer Architecture [[PDF](https://arxiv.org/abs/2405.18991),[Page](https://github.com/aigc-apps/EasyAnimate)]\n\n[arxiv 2024.08] CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer  [[PDF](https://arxiv.org/abs/2408.06072),[Page](https://github.com/THUDM/CogVideo)]\n\n[arxiv 2024.10] Allegro: Open the Black Box of Commercial-Level Video Generation Model  [[PDF](https://arxiv.org/abs/2410.15458),[Page](https://rhymes.ai/allegro_gallery)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## agent\n[arxiv 2025.10] VISTA: A Test-Time Self-Improving Video Generation Agent  [[PDF](https://arxiv.org/abs/2510.15831),[Page](https://g-vista.github.io/)] \n\n[arxiv 2025.11]  UniVA: Universal Video Agent towards Open-Source Next-Generation Video Generalist [[PDF](https://arxiv.org/pdf/2511.08521),[Page](https://univa.online/)] ![Code](https://img.shields.io/github/stars/univa-agent/univa?style=social&label=Star)\n\n[arxiv 2026.03]  VisionCreator: A Native Visual-Generation Agentic Model with Understanding, Thinking, Planning and Creation [[PDF](https://arxiv.org/pdf/2603.02681)]\n\n[arxiv 2026.03] CutClaw: Agentic Hours-Long Video Editing via Music Synchronization  [[PDF](https://arxiv.org/abs/2603.29664),[Page](https://github.com/GVCLab/CutClaw)] ![Code](https://img.shields.io/github/stars/GVCLab/CutClaw?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## scaling law \n[arxiv 2024.11] Towards Precise Scaling Laws for Video Diffusion Transformers [[PDF](https://arxiv.org/abs/2411.17470)] \n\n\n## State Space-based \n[arxiv 2024.03]SSM Meets Video Diffusion Models: Efficient Video Generation with Structured State Spaces [[PDF](https://arxiv.org/abs/2403.07711),[Page](https://github.com/shim0114/SSM-Meets-Video-Diffusion-Models)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## improve Video Diffusion models \n[arxiv 2023.10]ScaleCrafter: Tuning-free Higher-Resolution Visual Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2310.07702), [Page](https://yingqinghe.github.io/scalecrafter/)]\n\n[arxiv 2023.10]FreeU: Free Lunch in Diffusion U-Net [[PDF](https://arxiv.org/pdf/2309.11497.pdf), [Page](https://chenyangsi.top/FreeU/)]\n\n[arxiv 2023.12]FreeInit: Bridging Initialization Gap in Video Diffusion Models [[PDF](https://arxiv.org/abs/2312.07537),[Page](https://tianxingwu.github.io/pages/FreeInit/)]\n\n[arxiv 2024.07] Video Diffusion Alignment via Reward Gradients [[PDF](https://arxiv.org/abs/2407.08737), [Page](https://github.com/mihirp1998/VADER)]\n\n[arxiv 2024.08] FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance[[PDF](https://arxiv.org/abs/2408.08189)]\n\n[arxiv 2024.09] S2AG-Vid: Enhancing Multi-Motion 
Alignment in Video Diffusion Models via Spatial and Syntactic Attention-Based Guidance [[PDF](https://arxiv.org/pdf/2409.15259)]\n\n[arxiv 2024.10] BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way [[PDF](https://arxiv.org/abs/2410.06241)]\n\n[arxiv 2024.10] Pyramidal Flow Matching for Efficient Video Generative Modeling [[PDF](https://arxiv.org/abs/2410.05954), [Page](https://pyramid-flow.github.io/)]\n\n[arxiv 2024.10] T2V-Turbo-v2: Enhancing Video Generation Model Post-Training through Data, Reward, and Conditional Guidance Design [[PDF](https://arxiv.org/abs/2410.05677), [Page](https://t2v-turbo-v2.github.io/)]\n\n[arxiv 2024.11] Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning [[PDF](https://arxiv.org/abs/2410.24219), [Page](https://github.com/PR-Ryan/DEMO)]\n\n[arxiv 2024.11] Optical-Flow Guided Prompt Optimization for Coherent Video Generation [[PDF](https://arxiv.org/abs/2411.15540),[Page](https://motionprompt.github.io/)] ![Code](https://img.shields.io/github/stars/HyelinNAM/MotionPrompt?style=social&label=Star)\n\n[arxiv 2024.11] Free2Guide: Gradient-Free Path Integral Control for Enhancing Text-to-Video Generation with Large Vision-Language Models [[PDF](https://arxiv.org/abs/2411.17041),[Page](https://kjm981995.github.io/free2guide/)] \n\n[arxiv 2024.12] PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation [[PDF](https://arxiv.org/abs/2412.00596),[Page](https://github.com/pittisl/PhyT2V)] ![Code](https://img.shields.io/github/stars/pittisl/PhyT2V?style=social&label=Star)\n\n[arxiv 2024.12] Mimir: Improving Video Diffusion Models for Precise Text Understanding [[PDF](https://arxiv.org/abs/2412.03085),[Page](https://lucaria-academy.github.io/Mimir/)] \n\n[arxiv 2024.12] Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation [[PDF](https://arxiv.org/abs/2412.06016),[Page](https://aaltoml.github.io/BayesVLM/)] \n\n[arxiv 2024.12] STIV: Scalable Text and Image Conditioned Video Generation [[PDF](https://arxiv.org/abs/2412.07730)]\n\n[arxiv 2024.12] VideoDPO: Omni-Preference Alignment for Video Diffusion Generation [[PDF](https://arxiv.org/abs/2412.14167),[Page](https://videodpo.github.io/)] \n\n[arxiv 2025.01] RepVideo: Rethinking Cross-Layer Representation for Video Generation [[PDF](https://arxiv.org/abs/2501.08994),[Page](https://vchitect.github.io/RepVid-Webpage/)] ![Code](https://img.shields.io/github/stars/Vchitect/RepVideo?style=social&label=Star)\n\n[arxiv 2025.02] VideoJAM: Joint Appearance-Motion Representations for Enhanced Motion Generation in Video Models [[PDF](https://arxiv.org/abs/2502.02492),[Page](https://hila-chefer.github.io/videojam-paper.github.io/)] \n\n[arxiv 2025.02]  Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search [[PDF](https://arxiv.org/abs/2501.19252),[Page](https://sites.google.com/view/t2v-dlbs)] \n\n[arxiv 2025.02]  History-Guided Video Diffusion [[PDF](https://arxiv.org/abs/2502.06764),[Page](https://boyuan.space/history-guidance)] \n\n[arxiv 2025.02] Enhance-A-Video: Better Generated Video for Free  [[PDF](https://arxiv.org/pdf/2502.07508),[Page](https://github.com/NUS-HPC-AI-Lab/Enhance-A-Video)] ![Code](https://img.shields.io/github/stars/NUS-HPC-AI-Lab/Enhance-A-Video?style=social&label=Star)\n\n[arxiv 2025.03]  Raccoon: Multi-stage Diffusion Training with Coarse-to-Fine Curating Videos [[PDF](https://arxiv.org/pdf/2502.21314)]\n\n[arxiv 2025.03] The Best of Both Worlds: Integrating Language 
Models and Diffusion Models for Video Generation  [[PDF](https://arxiv.org/pdf/2503.04606),[Page](https://landiff.github.io/)]

[arxiv 2025.03] MagicComp: Training-free Dual-Phase Refinement for Compositional Video Generation  [[PDF](https://arxiv.org/pdf/2503.14428),[Page](https://hong-yu-zhang.github.io/MagicComp-Page/)] ![Code](https://img.shields.io/github/stars/Hong-yu-Zhang/MagicComp?style=social&label=Star)

[arxiv 2025.03]  Temporal Regularization Makes Your Video Generator Stronger [[PDF](https://arxiv.org/abs/2412.02114),[Page](https://haroldchen19.github.io/FluxFlow/)]

[arxiv 2025.04] Towards Physically Plausible Video Generation via VLM Planning  [[PDF](https://arxiv.org/abs/2503.23368)]

[arxiv 2025.04] FreSca: Unveiling the Scaling Space in Diffusion Models  [[PDF](https://arxiv.org/abs/2504.02154),[Page](https://wikichao.github.io/FreSca/)] ![Code](https://img.shields.io/github/stars/WikiChao/FreSca?style=social&label=Star)

[arxiv 2025.04] Discriminator-Free Direct Preference Optimization for Video Diffusion  [[PDF](https://arxiv.org/abs/2504.08542)]

[arxiv 2025.04] The Devil is in the Prompts: Retrieval-Augmented Prompt Optimization for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2504.11739),[Page](https://whynothaha.github.io/Prompt_optimizer/RAPO.html)]

[arxiv 2025.04]  EquiVDM: Equivariant Video Diffusion Models with Temporally Consistent Noise [[PDF](https://arxiv.org/pdf/2504.09789),[Page](https://research.nvidia.com/labs/genair/equivdm/)]

[arxiv 2025.05] Self-Rewarding Large Vision-Language Models for Optimizing Prompts in Text-to-Image Generation  [[PDF](https://arxiv.org/abs/2505.16763)]

[arxiv 2025.05] Model Already Knows the Best Noise: Bayesian Active Noise Selection via Attention in Video Diffusion Model  [[PDF](https://arxiv.org/abs/2505.17561),[Page](https://anse-project.github.io/anse-project/)]

[arxiv 2025.06]  Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models [[PDF](https://arxiv.org/pdf/2506.09229),[Page](https://crepavideo.github.io/)] ![Code](https://img.shields.io/github/stars/deepshwang/crepa?style=social&label=Star)

[arxiv 2025.06]  Emergent Temporal Correspondences from Video Diffusion Transformers [[PDF](https://arxiv.org/abs/2506.17220),[Page](https://cvlab-kaist.github.io/DiffTrack)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/DiffTrack?style=social&label=Star)

[arxiv 2025.07]  Encapsulated Composition of Text-to-Image and Text-to-Video Models for High-Quality Video Synthesis [[PDF](https://arxiv.org/pdf/2507.13753),[Page](https://github.com/Tonniia/EVS)] ![Code](https://img.shields.io/github/stars/Tonniia/EVS?style=social&label=Star)

[arxiv 2025.09]  PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation [[PDF](https://arxiv.org/abs/2509.20358),[Page](https://cwchenwang.github.io/physctrl/)]

[arxiv 2025.09]  NewtonGen: Physics-Consistent and Controllable Text-to-Video Generation via Neural Newtonian Dynamics [[PDF](https://arxiv.org/abs/2509.21309),[Page](https://github.com/pandayuanyu/NewtonGen)] ![Code](https://img.shields.io/github/stars/pandayuanyu/NewtonGen?style=social&label=Star)

[arxiv 2025.10] Inferring Dynamic Physical Properties from Video Foundation Models  [[PDF](https://arxiv.org/abs/2510.02311)]

[arxiv 2025.10] Learning to Generate Object Interactions with Physics-Guided Video Diffusion  [[PDF](https://arxiv.org/abs/2510.02284)]

[arxiv 2025.10] Epipolar Geometry Improves Video Generation 
Models  [[PDF](https://arxiv.org/abs/2510.21615),[Page](https://epipolar-dpo.github.io/)] ![Code](https://img.shields.io/github/stars/KupynOrest/epipolar-dpo?style=social&label=Star)\n\n[arxiv 2025.11] Plan-X: Instruct Video Generation via Semantic Planning  [[PDF](https://arxiv.org/abs/2511.17986),[Page](https://byteaigc.github.io/Plan-X/)] \n\n[arxiv 2025.12]  GeoVideo: Introducing Geometric Regularization into Video Generation Model [[PDF](https://arxiv.org/pdf/2512.03453),[Page](https://geovideo.github.io/GeoVideo/)]\n\n[arxiv 2025.12]  DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders [[PDF](https://arxiv.org/abs/2512.13690)]\n\n[arxiv 2026.01]  Motion Attribution for Video Generation [[PDF](https://arxiv.org/abs/2601.08828),[Page](https://research.nvidia.com/labs/sil/projects/MOTIVE/)] \n\n[arxiv 2026.01] Inference-time Physics Alignment of Video Generative Models with Latent World Models  [[PDF](https://arxiv.org/pdf/2601.10553)]\n\n[arxiv 2026.01] VideoGPA: Distilling Geometry Priors for 3D-Consistent Video Generation  [[PDF](https://arxiv.org/abs/2601.23286),[Page](https://hongyang-du.github.io/VideoGPA-Website/)] ![Code](https://img.shields.io/github/stars/Hongyang-Du/VideoGPA?style=social&label=Star)\n\n[arxiv 2026.01]  ConsID-Gen: View-Consistent and Identity-Preserving Image-to-Video Generation [[PDF](https://arxiv.org/abs/2602.10113),[Page](https://myangwu.github.io/ConsID-Gen)] \n\n[arxiv 2026.02] SPATIALALIGN: Aligning Dynamic Spatial Relationships in Video Generation  [[PDF](https://arxiv.org/abs/2602.22745),[Page](https://fengming001ntu.github.io/SpatialAlign/)] ![Code](https://img.shields.io/github/stars/fengming001ntu/SpatialAlign?style=social&label=Star)\n\n[arxiv 2026.03] DreamWorld: Unified World Modeling in Video Generation  [[PDF](https://arxiv.org/abs/2603.00466),[Page](https://github.com/ABU121111/DreamWorld)] ![Code](https://img.shields.io/github/stars/ABU121111/DreamWorld?style=social&label=Star)\n\n[arxiv 2026.03]  Physical Simulator In-the-Loop Video Generation [[PDF](https://arxiv.org/pdf/2603.06408),[Page](https://vcai.mpi-inf.mpg.de/projects/PSIVG/)] \n\n[arxiv 2026.03] Chain of Event-Centric Causal Thought for Physically Plausible Video Generation  [[PDF](https://arxiv.org/pdf/2603.09094)]\n\n[arxiv 2026.03] PhysAlign: Physics-Coherent Image-to-Video Generation through Feature and 3D Representation Alignment  [[PDF](https://arxiv.org/abs/2603.13770),[Page](https://physalign.github.io/PhysAlign)]\n\n[arxiv 2026.03] PhysVideo: Physically Plausible Video Generation with Cross-View Geometry Guidance  [[PDF](https://arxiv.org/abs/2603.18639),[Page](https://anonymous.4open.science/w/Phys4D/)]\n\n[arxiv 2026.03] DiReCT: Disentangled Regularization of Contrastive Trajectories for Physics-Refined Video Generation  [[PDF](https://arxiv.org/abs/2603.25931)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## loss \n[arxiv 2025.04]  REPA-E: Unlocking VAE for End-to-End Tuning with Latent Diffusion Transformers [[PDF](https://arxiv.org/abs/2504.10483),[Page](https://end2end-diffusion.github.io/)] ![Code](https://img.shields.io/github/stars/End2End-Diffusion/REPA-E?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n\n## composition \n[arxiv 2024.07]VideoTetris: Towards Compositional Text-To-Video Generation[[PDF](https://arxiv.org/abs/2406.04277), 
[Page](https://videotetris.github.io/)]


[arxiv 2024.07]GVDIFF: Grounded Text-to-Video Generation with Diffusion Models[[PDF](https://arxiv.org/abs/2407.01921)]

[arxiv 2024.07]Compositional Video Generation as Flow Equalization [[PDF](https://arxiv.org/abs/2407.06182), [Page](https://adamdad.github.io/vico/)]

[arxiv 2024.07] InVi: Object Insertion In Videos Using Off-the-Shelf Diffusion Models[[PDF](https://arxiv.org/abs/2407.10958)]

[arxiv 2025.01] VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control [[PDF](https://arxiv.org/abs/2501.01427),[Page](https://videoanydoor.github.io/)]

[arxiv 2025.01]BlobGEN-Vid: Compositional Text-to-Video Generation with Blob Video Representations  [[PDF](https://arxiv.org/abs/2501.07647),[Page](https://blobgen-vid2.github.io/)]

[arxiv 2025.02]  DC-ControlNet: Decoupling Inter- and Intra-Element Conditions in Image Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2502.14779),[Page](https://um-lab.github.io/DC-ControlNet/)]

[arxiv 2025.03]  Get In Video: Add Anything You Want to the Video [[PDF](https://arxiv.org/abs/2503.06268),[Page](https://zhuangshaobin.github.io/GetInVideo-project/)]

[arxiv 2025.03] DreamInsert: Zero-Shot Image-to-Video Object Insertion from A Single Image  [[PDF](https://arxiv.org/pdf/2503.10342)]

[arxiv 2025.04]  VIP: Video Inpainting Pipeline for Real World Human Removal [[PDF](https://arxiv.org/abs/2504.03041)]

[arxiv 2025.04] DyST-XL: Dynamic Layout Planning and Content Control for Compositional Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2504.15032)]

[arxiv 2025.08]  AnimateScene: Camera-controllable Animation in Any Scene [[PDF](https://arxiv.org/pdf/2508.05982)]

[arxiv 2025.09] GenCompositor: Generative Video Compositing with Diffusion Transformer  [[PDF](https://arxiv.org/abs/2509.02460),[Page](https://gencompositor.github.io/)] ![Code](https://img.shields.io/github/stars/TencentARC/GenCompositor?style=social&label=Star)

[arxiv 2025.10]  CoMo: Compositional Motion Customization for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2510.23007),[Page](https://como6.github.io/)]

[arxiv 2025.11] RISE-T2V: Rephrasing and Injecting Semantics with LLM for Expansive Text-to-Video Generation  [[PDF](https://arxiv.org/pdf/2511.04317),[Page](https://rise-t2v.github.io/)]

[arxiv 2025.12]  InstanceV: Instance-Level Video Generation [[PDF](https://aliothchen.github.io/projects/InstanceV/),[Page](https://aliothchen.github.io/projects/InstanceV/)]

[arxiv 2026.01]  PhyRPR: Training-Free Physics-Constrained Video Generation [[PDF](https://arxiv.org/abs/2601.09255)]

[arxiv 2026.01]FAIRT2V: Training-Free Debiasing for Text-to-Video Diffusion Models   [[PDF](https://arxiv.org/abs/2601.20791)]

[arxiv 2026.03] Training-free Motion Factorization for Compositional Video Generation  [[PDF](https://arxiv.org/pdf/2603.09104)]

[arxiv 2026.03] Tri-Prompting: Video Diffusion with Unified Control over Scene, Subject, and Motion  [[PDF](https://arxiv.org/abs/2603.15614),[Page](https://zhouzhenghong-gt.github.io/Tri-Prompting-Page/)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)



## Caption
[arxiv 2024.11] Grounded Video Caption Generation [[PDF](https://arxiv.org/abs/2411.07584)]

[arxiv 2024.12] Progress-Aware Video Frame Captioning [[PDF](https://arxiv.org/abs/2412.02071),[Page](https://vision.cs.utexas.edu/projects/ProgressCaptioner/)]
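
Most captioning pipelines in this section reduce to the same loop: sample keyframes, prompt a vision-language model for a dense, motion-aware caption, and pair the result with the clip for text-to-video training. A hedged sketch follows; `vlm_caption` is a hypothetical stand-in for whichever captioner a given paper uses, and the prompt wording is likewise assumed.

```python
# Frame-sampling + captioning loop common to the papers in this section.
# `vlm_caption` is a hypothetical stand-in for whichever vision-language
# captioner a given paper uses; the prompt wording is likewise assumed.
from typing import Callable, List

def caption_video(frames: List[bytes],
                  vlm_caption: Callable[[List[bytes], str], str],
                  num_keyframes: int = 8) -> str:
    # Uniformly subsample keyframes so long clips fit the VLM's context.
    stride = max(1, len(frames) // num_keyframes)
    keyframes = frames[::stride][:num_keyframes]
    prompt = ("Describe the subjects, their motion, the camera movement, "
              "and the scene as one dense caption.")
    return vlm_caption(keyframes, prompt)

dummy_vlm = lambda imgs, prompt: f"dense caption covering {len(imgs)} keyframes"
print(caption_video([b""] * 64, dummy_vlm))
```
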
[arxiv 2024.12] Mimir: Improving Video Diffusion Models for Precise Text Understanding [[PDF](https://arxiv.org/abs/2412.03085),[Page](https://lucaria-academy.github.io/Mimir/)]

[arxiv 2024.12] InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption [[PDF](https://arxiv.org/abs/2412.09283),[Page](https://github.com/NJU-PCALab/InstanceCap)] ![Code](https://img.shields.io/github/stars/NJU-PCALab/InstanceCap?style=social&label=Star)

[arxiv 2025.03] Cockatiel: Ensembling Synthetic and Human Preferenced Training for Detailed Video Caption  [[PDF](https://sais-fuxi.github.io/projects/cockatiel),[Page](https://sais-fuxi.github.io/projects/cockatiel/)] ![Code](https://img.shields.io/github/stars/Fr0zenCrane/Cockatiel?style=social&label=Star)

[arxiv 2025.04]  Any2Caption:Interpreting Any Condition to Caption for Controllable Video Generation [[PDF](https://arxiv.org/abs/2503.24379),[Page](https://sqwu.top/Any2Cap/)] ![Code](https://img.shields.io/github/stars/ChocoWu/Any2Caption?style=social&label=Star)

[arxiv 2025.06] Evaluating Multimodal Large Language Models on Video Captioning via Monte Carlo Tree Search  [[PDF](https://arxiv.org/abs/2506.11155),[Page](https://github.com/tjunlp-lab/MCTS-VCB)] ![Code](https://img.shields.io/github/stars/tjunlp-lab/MCTS-VCB?style=social&label=Star)

[arxiv 2025.10]  Omni-Captioner: Data Pipeline, Models, and Benchmark for Omni Detailed Perception [[PDF](https://arxiv.org/abs/2510.12720),[Page](https://github.com/ddlBoJack/Omni-Captioner)] ![Code](https://img.shields.io/github/stars/ddlBoJack/Omni-Captioner?style=social&label=Star)

[arxiv 2025.10] IF-VidCap: Can Video Caption Models Follow Instructions?  
[[PDF](https://arxiv.org/abs/2510.18726),[Page](https://github.com/NJU-LINK/IF-VidCap)] ![Code](https://img.shields.io/github/stars/NJU-LINK/IF-VidCap?style=social&label=Star)\n\n[arxiv 2025.10] VC4VG: Optimizing Video Captions for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2510.24134),[Page](https://github.com/alimama-creative/VC4VG)] ![Code](https://img.shields.io/github/stars/qyr0403/VC4VG?style=social&label=Star)\n\n[arxiv 2025.10] More than a Moment: Towards Coherent Sequences of Audio Descriptions  [[PDF](https://arxiv.org/abs/2510.25440)]\n\n[arxiv 2025.10]  Towards Fine-Grained Human Motion Video Captioning [[PDF](https://arxiv.org/abs/2510.24767)]\n\n[arxiv 2025.11]  VDC-Agent: When Video Detailed Captioners Evolve Themselves via Agentic Self-Reflection [[PDF](https://arxiv.org/abs/2511.19436)]\n\n[arxiv 2025.12] Taming Hallucinations: Boosting MLLMs’ Video Understanding via Counterfactual Video Generation  [[PDF](https://arxiv.org/abs/xxxxxx),[Page](https://amap-ml.github.io/Taming-Hallucinations/)] ![Code](https://img.shields.io/github/stars/AMAP-ML/Taming-Hallucinations?style=social&label=Star)\n\n[arxiv 2026.02]  TimeChat-Captioner: Scripting Multi-Scene Videos with Time-Aware and Structural Audio-Visual Captions [[PDF](https://arxiv.org/pdf/2602.08711),[Page](https://github.com/yaolinli/TimeChat-Captioner)] ![Code](https://img.shields.io/github/stars/yaolinli/TimeChat-Captioner?style=social&label=Star)\n\n[arxiv 2026.03] VQQA: An Agentic Approach for Video Evaluation and Quality Improvement  [[PDF](https://arxiv.org/abs/2603.12310),[Page](https://yiwen-song.github.io/vqqa/)]\n\n[arxiv 2026.03] Stay in your Lane: Role Specific Queries with Overlap Suppression Loss for Dense Video Captioning  [[PDF](https://arxiv.org/abs/2603.11439)]\n\n[arxiv 2026.03] HumanOmni-Speaker: Identifying Who said What and When  [[PDF](https://arxiv.org/abs/2603.21664)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## multi-concept\n[arxiv 2025.12]  Composing Concepts from Images and Videos via Concept-prompt Binding [[PDF](https://arxiv.org/abs/2512.09824),[Page](https://refkxh.github.io/BiCo_Webpage/)] ![Code](https://img.shields.io/github/stars/refkxh/bico?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## multi-shot\n[arxiv 2025.06] AnimeShooter: A Multi-Shot Animation Dataset for Reference-Guided Video Generation  [[PDF](https://arxiv.org/abs/2506.03126),[Page](https://qiulu66.github.io/animeshooter/)] ![Code](https://img.shields.io/github/stars/qiulu66/Anime-Shooter?style=social&label=Star)\n\n[arxiv 2025.10]  MoGA: Mixture-of-Groups Attention for End-to-End Long Video Generation [[PDF](https://arxiv.org/abs/2510.18692),[Page](https://jiawn-creator.github.io/mixture-of-groups-attention/)] ![Code](https://img.shields.io/github/stars/bytedance-fanqie-ai/MoGA?style=social&label=Star)\n\n[arxiv 2025.11] Video-as-Answer: Predict and Generate Next Video Event with Joint-GRPO  [[PDF](https://arxiv.org/abs/2511.16669),[Page](https://video-as-answer.github.io/)] ![Code](https://img.shields.io/github/stars/KlingTeam/VANS?style=social&label=Star)\n\n[arxiv 2025.12] FilmWeaver: Weaving Consistent Multi-Shot Videos with Cache-Guided Autoregressive Diffusion  [[PDF](https://arxiv.org/abs/2512.11274),[Page](https://filmweaver.github.io/)] \n\n[arxiv 2025.12] StoryMem: Multi-shot Long Video Storytelling with Memory  
[[PDF](https://arxiv.org/abs/2512.19539),[Page](https://kevin-thu.github.io/StoryMem/)] ![Code](https://img.shields.io/github/stars/Kevin-thu/StoryMem?style=social&label=Star)\n\n[arxiv 2025.12] DreaMontage: Arbitrary Frame-Guided One-Shot Video Generation  [[PDF](https://arxiv.org/pdf/2512.21252),[Page](https://dreamontage.github.io/DreaMontage/)] \n\n[arxiv 2026.01]  VideoMemory: Toward Consistent Video Generation via Memory Integration [[PDF](https://arxiv.org/pdf/2601.03655),[Page](https://hit-perfect.github.io/VideoMemory/)] \n\n[arxiv 2026.03] ShotVerse: Advancing Cinematic Camera Control for Text-Driven Multi-Shot Video Creation  [[PDF](https://arxiv.org/abs/2603.11421),[Page](https://shotverse.github.io/)] ![Code](https://img.shields.io/github/stars/Songlin1998/ShotVerse?style=social&label=Star)\n\n[arxiv 2026.03] ShotStream: Streaming Multi-Shot Video Generation for Interactive Storytelling  [[PDF](https://arxiv.org/abs/2603.25746),[Page](https://luo0207.github.io/ShotStream/)] ![Code](https://img.shields.io/github/stars/KlingAIResearch/ShotStream?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## multi-prompt \n[arxiv 2023.12]MTVG : Multi-text Video Generation with Text-to-Video Models [[PDF](https://arxiv.org/abs/2312.04086)]\n\n[arxiv 2024.05]TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation [[PDF](https://arxiv.org/abs/2405.04682),[Page](https://talc-mst2v.github.io/)]\n\n[arxiv 2024.06]VideoTetris: Towards Compositional Text-To-Video Generation[[PDF](https://arxiv.org/abs/2406.04277), [Page](https://videotetris.github.io/)]\n\n[arxiv 2024.06] Pandora: Towards General World Model with Natural Language Actions and Video States [[PDF](https://arxiv.org/abs/2406.09455), [Page](https://world-model.maitrix.org/)]\n\n[arxiv 2024.12] Mind the Time: Temporally-Controlled Multi-Event Video Generation [[PDF](https://arxiv.org/abs/2412.05263),[Page](https://mint-video.github.io/)]\n\n[arxiv 2025.02] Object-Centric Image to Video Generation with Language Guidance  [[PDF](https://arxiv.org/abs/2502.11655),[Page](https://play-slot.github.io/TextOCVP/)] ![Code](https://img.shields.io/github/stars/angelvillar96/TextOCVP?style=social&label=Star)\n\n[arxiv 2025.03]  Tuning-Free Multi-Event Long Video Generation via Synchronized Coupled Sampling [[PDF](https://arxiv.org/abs/2503.08605),[Page](https://syncos2025.github.io/)] \n\n[arxiv 2025.09] From Prompt to Progression: Taming Video Diffusion Models for Seamless Attribute Transition  [[PDF](https://arxiv.org/pdf/2509.19690),[Page](https://github.com/lynn-ling-lo/Prompt2Progression)] ![Code](https://img.shields.io/github/stars/lynn-ling-lo/Prompt2Progression?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## multi-event \n[arxiv 2025.10] When and Where do Events Switch in Multi-Event Video Generation?  
[[PDF](https://arxiv.org/abs/2510.03049)]

[arxiv 2025.12] Active Intelligence in Video Avatars via Closed-loop World Modeling  [[PDF](https://arxiv.org/abs/2512.20615),[Page](https://xuanhuahe.github.io/ORCA/)] ![Code](https://img.shields.io/github/stars/xuanhuahe/ORCA?style=social&label=Star)

[arxiv 2026.03] SwitchCraft: Training-Free Multi-Event Video Generation with Attention Controls  [[PDF](https://arxiv.org/abs/2602.23956),[Page](https://switchcraft-project.github.io/)] ![Code](https://img.shields.io/github/stars/Westlake-AGI-Lab/SwitchCraft?style=social&label=Star)

[arxiv 2026.03] Event-Driven Video Generation  [[PDF](https://arxiv.org/abs/2603.13402)]


[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)


## long video generation 
[arxiv 2023.05]Gen-L-Video: Long Video Generation via Temporal Co-Denoising [[PDF](https://arxiv.org/abs/2305.18264), [Page](https://g-u-n.github.io/projects/gen-long-video/index.html)]

[arxiv 2023.10]FreeNoise: Tuning-Free Longer Video Diffusion Via Noise Rescheduling [[PDF](https://arxiv.org/abs/2310.15169),[Page](http://haonanqiu.com/projects/FreeNoise.html)]

[arxiv 2023.12]VIDiff: Translating Videos via Multi-Modal Instructions with Diffusion Models[[PDF](https://arxiv.org/abs/2311.18837),[Page](https://chenhsing.github.io/VIDiff)]

[arxiv 2023.12]AVID: Any-Length Video Inpainting with Diffusion Model [[PDF](https://arxiv.org/abs/2312.03816),[Page](https://zhang-zx.github.io/AVID/)]

[arxiv 2023.12]RealCraft: Attention Control as A Solution for Zero-shot Long Video Editing [[PDF](https://arxiv.org/abs/2312.12635)]

[arxiv 2024.03]VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis [[PDF](https://arxiv.org/pdf/2403.13501.pdf),[Page](https://yumengli007.github.io/VSTAR/)]

[arxiv 2024.03]StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text [[PDF](https://arxiv.org/abs/2403.14773)]

[arxiv 2024.04]FlexiFilm: Long Video Generation with Flexible Conditions [[PDF](https://arxiv.org/abs/2404.18620)]

[arxiv 2024.05] FIFO-Diffusion: Generating Infinite Videos from Text without Training [[PDF](https://arxiv.org/abs/2405.11473),[Page](https://jjihwan.github.io/projects/FIFO-Diffusion)]

[arxiv 2024.05]Controllable Long Image Animation with Diffusion Models[[PDF](https://arxiv.org/pdf/2405.17306),[Page](https://wangqiang9.github.io/Controllable.github.io/)]

[arxiv 2024.06]CoNo: Consistency Noise Injection for Tuning-free Long Video Diffusion [[PDF](https://arxiv.org/abs/2406.05082), [Page](https://wxrui182.github.io/CoNo.github.io/)]

[arxiv 2024.06]Video-Infinity: Distributed Long Video Generation [[PDF](https://arxiv.org/abs/2406.16260), [Page](https://video-infinity.tanzhenxiong.com/)]

[arxiv 2024.06] FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Attention [[PDF](https://arxiv.org/pdf/2407.19918), [Page](https://freelongvideo.github.io/)]

[arxiv 2024.06] ViD-GPT: Introducing GPT-style Autoregressive Generation in Video Diffusion Models [[PDF](https://arxiv.org/pdf/2406.10981),[Page](https://github.com/Dawn-LX/CausalCache-VDM)] ![Code](https://img.shields.io/github/stars/Dawn-LX/CausalCache-VDM?style=social&label=Star)

[arxiv 2024.06] Live2Diff: Live Stream Translation via Uni-directional Attention in Video Diffusion Models [[PDF](https://arxiv.org/abs/2407.08701),[Page](https://live2diff.github.io/)] ![Code](https://img.shields.io/github/stars/open-mmlab/Live2Diff?style=social&label=Star)
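
Several training-free entries above (FreeNoise being the clearest example) extend a frozen short-video model by rescheduling its initial noise: the native-length noise is repeated with local shuffles so overlapping denoising windows stay correlated across the longer clip. A minimal sketch of that idea, with the shuffle granularity simplified for illustration:

```python
# Noise rescheduling in the spirit of FreeNoise: tile a short clip's initial
# noise out to more frames via locally shuffled repeats, so overlapping
# windows of a frozen short-video model see correlated noise. The shuffle
# granularity here is an illustrative simplification.
import torch

def reschedule_noise(base_noise, target_frames):
    # base_noise: (C, T, H, W) initial noise at the model's native length T
    c, t, h, w = base_noise.shape
    chunks = []
    while sum(chunk.shape[1] for chunk in chunks) < target_frames:
        perm = torch.randperm(t)             # shuffle frame order within each repeat
        chunks.append(base_noise[:, perm])
    return torch.cat(chunks, dim=1)[:, :target_frames]

long_noise = reschedule_noise(torch.randn(4, 16, 32, 32), target_frames=64)
print(long_noise.shape)  # torch.Size([4, 64, 32, 32])
```
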
[arxiv 2024.07]Multi-sentence Video Grounding for Long Video Generation[[PDF](https://arxiv.org/abs/2407.13219)]

[arxiv 2024.08]Training-free High-quality Video Generation with Chain of Diffusion Model Experts [[PDF](https://arxiv.org/abs/2408.13423), [Page](https://confiner2025.github.io/)]

[arxiv 2024.08] TVG: A Training-free Transition Video Generation Method with Diffusion Models[[PDF](https://arxiv.org/abs/2408.13413), [Page](https://sobeymil.github.io/tvg.com/)]

[arxiv 2024.09] DiVE: DiT-based Video Generation with Enhanced Control [[PDF](https://arxiv.org/abs/2409.01595), [Page](https://liautoad.github.io/DIVE/)]

[arxiv 2024.10] Progressive Autoregressive Video Diffusion Models [[PDF](https://arxiv.org/abs/2410.08151), [Page](https://desaixie.github.io/pa-vdm/)]

[arxiv 2024.10] ARLON: Boosting Diffusion Transformers with Autoregressive Models for Long Video Generation [[PDF](https://arxiv.org/abs/2410.20502), [Page](https://arlont2v.github.io/)]

[arxiv 2024.12] Long Video Diffusion Generation with Segmented Cross-Attention and Content-Rich Video Data Curation [[PDF](https://arxiv.org/abs/2412.01316),[Page](https://presto-video.github.io/)] ![Code](https://img.shields.io/github/stars/rhymes-ai/Allegro?style=social&label=Star)

[arxiv 2024.12] Advancing Auto-Regressive Continuation for Video Frames [[PDF](https://arxiv.org/pdf/2412.03758)]

[arxiv 2024.12] From Slow Bidirectional to Fast Causal Video Generators  [[PDF](https://arxiv.org/abs/2412.07772),[Page](https://causvid.github.io/)]

[arxiv 2024.12] Owl-1: Omni World Model for Consistent Long Video Generation [[PDF](https://arxiv.org/abs/2412.09600),[Page](https://github.com/huang-yh/Owl)] ![Code](https://img.shields.io/github/stars/huang-yh/Owl?style=social&label=Star)

[arxiv 2024.12] DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation [[PDF](https://arxiv.org/abs/2412.18597),[Page](https://onevfall.github.io/project_page/ditctrl/)] ![Code](https://img.shields.io/github/stars/TencentARC/DiTCtrl?style=social&label=Star)

[arxiv 2025.01] Tuning-Free Long Video Generation via Global-Local Collaborative Diffusion [[PDF](https://arxiv.org/pdf/2501.05484)]

[arxiv 2025.01] Ouroboros-Diffusion: Exploring Consistent Content Generation in Tuning-free Long Video Diffusion [[PDF](https://arxiv.org/abs/2501.09019)]

[arxiv 2025.02] MaskFlow: Discrete Flows for Flexible and Efficient Long Video Generation  [[PDF](https://arxiv.org/abs/2502.11234),[Page](https://compvis.github.io/maskflow/)] ![Code](https://img.shields.io/github/stars/CompVis/maskflow?style=social&label=Star)

[arxiv 2025.02] MALT Diffusion: Memory-Augmented Latent Transformers for Any-Length Video Generation  [[PDF](https://arxiv.org/pdf/2502.12632)]

[arxiv 2025.03] Training-free and Adaptive Sparse Attention for Efficient Long Video Generation  [[PDF](https://arxiv.org/abs/2502.21079)]

[arxiv 2025.03]  VideoMerge: Towards Training-free Long Video Generation [[PDF](https://arxiv.org/abs/2503.09926)]

[arxiv 2025.04] One-Minute Video Generation with Test-Time Training  [[PDF](https://arxiv.org/abs/2504.05298),[Page](https://test-time-training.github.io/video-dit/)] ![Code](https://img.shields.io/github/stars/test-time-training/ttt-video-dit?style=social&label=Star)

[arxiv 2025.04]  SkyReels-V2: Infinite-length Film Generative Model 
[[PDF](https://arxiv.org/abs/2504.13074),[Page](https://github.com/SkyworkAI/SkyReels-V2)] ![Code](https://img.shields.io/github/stars/SkyworkAI/SkyReels-V2?style=social&label=Star)\n\n[arxiv 2025.04] FreePCA: Integrating Consistency Information across Long-short Frames in Training-free Long Video Generation via Principal Component Analysis  [[PDF](https://arxiv.org/abs/2505.01172),[Page](https://github.com/JosephTiTan/FreePCA)] ![Code](https://img.shields.io/github/stars/JosephTiTan/FreePCA?style=social&label=Star)\n\n[arxiv 2025.05]  InfLVG: Reinforce Inference-Time Consistent Long Video Generation with GRPO [[PDF](https://arxiv.org/abs/2505.17574),[Page](https://github.com/MAPLE-AIGC/InfLVG)] ![Code](https://img.shields.io/github/stars/MAPLE-AIGC/InfLVG?style=social&label=Star)\n\n[arxiv 2025.06] LumosFlow: Motion-Guided Long Video Generation  [[PDF](https://arxiv.org/abs/2506.02497),[Page](https://jiahaochen1.github.io/LumosFlow/)]\n\n[arxiv 2025.06] Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion  [[PDF](https://arxiv.org/pdf/2506.08009),[Page](https://self-forcing.github.io/)] ![Code](https://img.shields.io/github/stars/guandeh17/Self-Forcing?style=social&label=Star)\n\n[arxiv 2025.06] Radial Attention: O(nlogn) Sparse Attention with Energy Decay for Long Video Generation  [[PDF](https://arxiv.org/abs/2506.19852),[Page](https://hanlab.mit.edu/projects/radial-attention)] ![Code](https://img.shields.io/github/stars/mit-han-lab/radial-attention?style=social&label=Star)\n\n[arxiv 2025.07]  FreeLong++: Training-Free Long Video Generation via Multi-band SpectralFusion [[PDF](https://arxiv.org/abs/2507.00162),[Page](https://freelongvideo.github.io/)] \n\n[arxiv 2025.07]  LoViC: Efficient Long Video Generation with Context Compression [[PDF](https://arxiv.org/abs/),[Page](https://jiangjiaxiu.github.io/lovic/)] \n\n[arxiv 2025.07] TokensGen: Harnessing Condensed Tokens for Long Video Generation  [[PDF](https://arxiv.org/abs/2507.15728),[Page](https://vicky0522.github.io/tokensgen-webpage/)] ![Code](https://img.shields.io/github/stars/Vicky0522/TokensGen?style=social&label=Star)\n\n[arxiv 2025.08] LongVie: Multimodal-Guided Controllable Ultra-Long Video Generation  [[PDF](https://arxiv.org/abs/2508.03694),[Page](https://vchitect.github.io/LongVie-project/)] ![Code](https://img.shields.io/github/stars/Vchitect/LongVie?style=social&label=Star)\n\n[arxiv 2025.08]  Macro-from-Micro Planning for High-Quality and Parallelized Autoregressive Long Video Generation [[PDF](https://arxiv.org/abs/2508.03334),[Page](https://nju-xunzhixiang.github.io/Anchor-Forcing-Page/)] ![Code](https://img.shields.io/github/stars/xbxsxp9/MMPL?style=social&label=Star)\n\n[arxiv 2025.08] AnchorSync: Global Consistency Optimization for Long Video Editing  [[PDF](https://arxiv.org/abs/2508.14609),[Page](https://github.com/VISION-SJTU/AnchorSync)] ![Code](https://img.shields.io/github/stars/VISION-SJTU/AnchorSync?style=social&label=Star)\n\n[arxiv 2025.08] WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception  [[PDF](https://arxiv.org/abs/2508.15720),[Page](https://johanan528.github.io/worldweaver_web/)]\n\n[arxiv 2025.08]  Mixture of Contexts for Long Video Generation [[PDF](http://arxiv.org/abs/2508.21058),[Page](https://primecai.github.io/moc/)] \n\n[arxiv 2025.09]  LongLive: Real-time Interactive Long Video Generation [[PDF](https://arxiv.org/abs/2509.22622),[Page](https://github.com/NVlabs/LongLive)] 
![Code](https://img.shields.io/github/stars/NVlabs/LongLive?style=social&label=Star)\n\n[arxiv 2025.10] Self-Forcing++: Towards Minute-Scale High-Quality Video Generation  [[PDF](https://arxiv.org/abs/2510.02283),[Page](https://self-forcing-plus-plus.github.io/)] ![Code](https://img.shields.io/github/stars/justincui03/Self-Forcing-Plus-Plus?style=social&label=Star)\n\n[arxiv 2025.10]  Pack and Force Your Memory: Long-form and Consistent Video Generation [[PDF](https://arxiv.org/abs/2510.01784)]\n\n[arxiv 2025.10]  Stable Video Infinity: Infinite-Length Video Generation with Error Recycling [[PDF](https://arxiv.org/abs/2510.09212),[Page](https://stable-video-infinity.github.io/homepage/)] ![Code](https://img.shields.io/github/stars/vita-epfl/Stable-Video-Infinity?style=social&label=Star)\n\n[arxiv 2025.11] Infinity-RoPE: Action-Controllable Infinite Video Generation Emerges From Autoregressive Self-Rollout  [[PDF](https://arxiv.org/pdf/2511.20649),[Page](https://infinity-rope.github.io/)] \n\n[arxiv 2025.12]  SneakPeek: Future-Guided Instructional Streaming Video Generation [[PDF](https://arxiv.org/pdf/2512.13019)]\n\n[arxiv 2025.12] Flowing Adaptive Memory for Consistent and Efficient Long Video Narratives  [[PDF](https://arxiv.org/abs/2512.14699),[Page](https://sihuiji.github.io/MemFlow.github.io/)] ![Code](https://img.shields.io/github/stars/KlingTeam/MemFlow?style=social&label=Star)\n\n[arxiv 2025.12]  End-to-End Training for Autoregressive Video Diffusion via Self-Resampling [[PDF](https://arxiv.org/pdf/2512.15702),[Page](https://guoyww.github.io/projects/resampling-forcing/)]\n\n[arxiv 2025.12]  Memorize-and-Generate: Towards Long-Term Consistency in Real-Time Video Generation [[PDF](https://arxiv.org/pdf/2512.18741)]\n\n[arxiv 2025.12] BlockVid: Block Diffusion for High-Quality and Consistent Minute-Long Video Generation  [[PDF](https://arxiv.org/abs/2511.22973),[Page](https://ziplab.co/BlockVid/)] \n\n[arxiv 2026.01]  Reward-Forcing: Autoregressive Video Generation with Reward Feedback [[PDF](https://arxiv.org/abs/2601.16933)]\n\n[arxiv 2026.01]  LoL: Longer than Longer, Scaling Video Generation to Hour [[PDF](https://arxiv.org/abs/2601.16914)]\n\n[arxiv 2026.01]  Entropy-Guided k-Guard Sampling for Long-Horizon Autoregressive Video Generation [[PDF](https://arxiv.org/abs/2601.19488),[Page](https://greanguy.github.io/ENkG/)] \n\n[arxiv 2026.01]  Context Forcing: Consistent Autoregressive Video Generation with Long Context [[PDF](https://arxiv.org/abs/2602.06028),[Page](https://chenshuo20.github.io/Context_Forcing/)] ![Code](https://img.shields.io/github/stars/TIGER-AI-Lab/Context-Forcing?style=social&label=Star)\n\n[arxiv 2026.01]  Pathwise Test-Time Correction for Autoregressive Long Video Generation [[PDF](https://arxiv.org/pdf/2602.05871)]\n\n[arxiv 2026.02] Rolling Sink: Bridging Limited-Horizon Training and Open-Ended Testing in Autoregressive Video Diffusion  [[PDF](https://arxiv.org/abs/2602.07775),[Page](https://rolling-sink.github.io/)] ![Code](https://img.shields.io/github/stars/haodong2000/Rolling-Sink?style=social&label=Star)\n\n[arxiv 2026.02] Train Short, Inference Long: Training-free Horizon Extension for Autoregressive Video Generation  [[PDF](https://arxiv.org/abs/2602.14027),[Page](https://ga-lee.github.io/FLEX_demo/)] ![Code](https://img.shields.io/github/stars/Ga-Lee/Frequency-aware-Length-EXtension?style=social&label=Star)\n\n[arxiv 2026.03] Mode Seeking meets Mean Seeking for Fast Long Video Generation  
[[PDF](https://arxiv.org/abs/2602.24289),[Page](https://primecai.github.io/mmm/)] \n\n[arxiv 2026.03]  Helios: Real Real-Time Long Video Generation Model [[PDF](https://arxiv.org/abs/2603.04379),[Page](https://pku-yuangroup.github.io/Helios-Page/)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/Helios?style=social&label=Star)\n\n[arxiv 2026.03] HiAR: Efficient Autoregressive Long Video Generation via Hierarchical Denoising  [[PDF](https://arxiv.org/abs/2603.08703),[Page](https://jacky-hate.github.io/HiAR/)] ![Code](https://img.shields.io/github/stars/Jacky-hate/HiAR?style=social&label=Star)\n\n[arxiv 2026.03] Relax Forcing: Relaxed KV-Memory for Consistent Long Video Generation  [[PDF](https://arxiv.org/abs/2603.21366),[Page](https://zengqunzhao.github.io/Relax-Forcing)] ![Code](https://img.shields.io/github/stars/zengqunzhao/Relax-Forcing?style=social&label=Star)\n\n[arxiv 2026.03] PackForcing: Short Video Training Suffices for Long Video Sampling and Long Context Inference  [[PDF](https://arxiv.org/abs/2603.25730)] ![Code](https://img.shields.io/github/stars/ShandaAI/PackForcing?style=social&label=Star)\n\n[arxiv 2026.03] Free-Lunch Long Video Generation via Layer-Adaptive O.O.D Correction  [[PDF](https://arxiv.org/abs/2603.25209)] ![Code](https://img.shields.io/github/stars/Westlake-AGI-Lab/FreeLOC?style=social&label=Star)\n\n[arxiv 2026.03] DCARL: A Divide-and-Conquer Framework for Autoregressive Long-Trajectory Video Generation  [[PDF](https://arxiv.org/abs/2603.24835),[Page](https://junyiouy.github.io/projects/dcarl)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
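\n\nMany of the training-free entries above (FreeNoise, FIFO-Diffusion, CoNo, FreeLong, Ouroboros-Diffusion) stretch a pretrained short-clip model to arbitrary length by rescheduling noise over a sliding window instead of retraining. Below is a minimal sketch of the FIFO-style diagonal-denoising loop; `denoise_step`, the toy sigma schedule, and all tensor shapes are assumptions for illustration, not the reference implementation of any paper listed here.\n\n```python\nimport torch\n\ndef denoise_step(model, frames, sigmas, prompt_emb):\n    # Hypothetical stand-in for one reverse-diffusion update of a pretrained\n    # short-clip video model: every frame in `frames` is advanced one step\n    # down from its own per-frame noise level in `sigmas`.\n    return model(frames, sigmas, prompt_emb)\n\ndef fifo_generate(model, prompt_emb, num_frames_out, steps=16,\n                  shape=(4, 64, 64), device="cuda"):\n    # Diagonal denoising with a frame queue of length `steps`: noise increases\n    # from head (nearly clean) to tail (pure noise). Each iteration denoises\n    # the whole queue one step, emits the now-clean head frame, and enqueues\n    # a fresh pure-noise frame at the tail.\n    sigmas = torch.linspace(0.0, 1.0, steps + 1, device=device)  # toy schedule\n    queue = torch.randn(steps, *shape, device=device)   # real systems warm the\n    levels = torch.arange(1, steps + 1, device=device)  # queue up with a clip\n    out = []\n    while len(out) < num_frames_out:\n        queue = denoise_step(model, queue, sigmas[levels], prompt_emb)\n        levels = levels - 1\n        out.append(queue[0])  # head has reached level 0: fully denoised\n        queue = torch.cat([queue[1:], torch.randn(1, *shape, device=device)])\n        levels = torch.cat([levels[1:], levels.new_tensor([steps])])\n    return torch.stack(out)\n```\n\nBecause each frame traverses every noise level as it migrates through the queue, per-frame cost matches ordinary sampling while memory stays constant in the output length; cross-window consistency is then what the noise-rescheduling and spectral-blending papers above try to improve.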
\n\n\n\n## memory \n[arxiv 2025.06] VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed View Memory  [[PDF](http://arxiv.org/abs/2506.18903),[Page](https://v-mem.github.io/)] ![Code](https://img.shields.io/github/stars/runjiali-rl/vmem?style=social&label=Star)\n\n[arxiv 2025.07] Ella: Embodied Social Agents with Lifelong Memory  [[PDF](https://arxiv.org/pdf/2506.24019),[Page](https://umass-embodied-agi.github.io/Ella/)] ![Code](https://img.shields.io/github/stars/UMass-Embodied-AGI/Ella?style=social&label=Star)\n\n[arxiv 2025.12]  Memorize-and-Generate: Towards Long-Term Consistency in Real-Time Video Generation [[PDF](https://arxiv.org/pdf/2512.18741)]\n\n[arxiv 2025.12]  Pretraining Frame Preservation in Autoregressive Video Memory Compression [[PDF](https://arxiv.org/pdf/2512.23851)]\n\n[arxiv 2026.03] Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models  [[PDF](https://arxiv.org/abs/2603.25716)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## cot\n[arxiv 2025.10] VChain: Chain-of-Visual-Thought for Reasoning in Video Generation  [[PDF](https://arxiv.org/abs/2510.05094),[Page](https://eyeline-labs.github.io/VChain)] ![Code](https://img.shields.io/github/stars/Eyeline-Labs/VChain?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## robot \n[arxiv 2025.06] Learning Video Generation for Robotic Manipulation with Collaborative Trajectory Control  [[PDF](https://arxiv.org/abs/2506.01943),[Page](https://fuxiao0719.github.io/projects/robomaster/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/RoboMaster?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## infinity scene / 360\n[arxiv 2023.12] WonderJourney: Going from Anywhere to Everywhere [[PDF](https://arxiv.org/abs/2312.03884),[Page](https://kovenyu.com/wonderjourney/)]\n\n[arxiv 2024.01] 360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model [[PDF](https://arxiv.org/abs/2401.06578)]\n\n## Story / Concept \n[arxiv 2023.05]TaleCrafter: Interactive Story Visualization with Multiple Characters [[PDF](https://arxiv.org/abs/2305.18247), [Page](https://videocrafter.github.io/TaleCrafter/)]\n\n[arxiv 2023.07]Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation [[PDF](https://arxiv.org/abs/2307.06940), [Page](https://videocrafter.github.io/Animate-A-Story)]\n\n[arxiv 2024.01]VideoDrafter: Content-Consistent Multi-Scene Video Generation with LLM [[PDF](https://arxiv.org/abs/2401.01256), [Page](https://videodrafter.github.io/)]\n\n[arxiv 2024.01]Vlogger: Make Your Dream A Vlog [[PDF](https://arxiv.org/abs/2401.09414),[Page](https://github.com/zhuangshaobin/Vlogger)]\n\n[arxiv 2024.03]AesopAgent: Agent-driven Evolutionary System on Story-to-Video Production [[PDF](https://arxiv.org/abs/2403.07952),[Page](https://aesopai.github.io/)]\n\n[arxiv 2024.04]StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation [[PDF](https://arxiv.org/abs/2405.01434),[Page](https://github.com/HVision-NKU/StoryDiffusion)]\n\n[arxiv 2024.05]The Lost Melody: Empirical Observations on Text-to-Video Generation From A Storytelling Perspective [[PDF](https://arxiv.org/abs/2405.08720)]\n\n[arxiv 2024.05]DisenStudio: Customized Multi-subject Text-to-Video Generation with Disentangled Spatial Control [[PDF](https://arxiv.org/abs/2405.12796),[Page](https://forchchch.github.io/disenstudio.github.io/)]\n\n[arxiv 2024.11] DreamRunner: Fine-Grained Storytelling Video Generation with Retrieval-Augmented Motion Adaptation [[PDF](https://arxiv.org/abs/2411.16657),[Page](https://dreamrunner-story2video.github.io/)] ![Code](https://img.shields.io/github/stars/wz0919/DreamRunner?style=social&label=Star)\n\n[arxiv 2025.01] VideoAuteur: Towards Long Narrative Video Generation [[PDF](https://arxiv.org/abs/2501.06173),[Page](https://videoauteur.github.io/)] ![Code](https://img.shields.io/github/stars/lambert-x/VideoAuteur?style=social&label=Star)\n\n[arxiv 2025.03] Text2Story: Advancing Video Storytelling with Text Guidance  [[PDF](https://arxiv.org/pdf/2503.06310)]\n\n[arxiv 2025.03] Long Context Tuning for Video Generation  [[PDF](https://arxiv.org/pdf/2503.10589),[Page](https://guoyww.github.io/projects/long-context-video/)] \n\n[arxiv 2025.04] AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction  [[PDF](https://arxiv.org/pdf/2504.01014),[Page](https://howe125.github.io/AnimeGamer.github.io/)] ![Code](https://img.shields.io/github/stars/TencentARC/AnimeGamer?style=social&label=Star)\n\n[arxiv 2025.04] One-Minute Video Generation with Test-Time Training  [[PDF](https://arxiv.org/abs/2504.05298),[Page](https://test-time-training.github.io/video-dit/)] ![Code](https://img.shields.io/github/stars/test-time-training/ttt-video-dit?style=social&label=Star)\n\n[arxiv 2025.04]  VC-LLM: Automated Advertisement Video Creation from Raw Footage using Multi-modal LLMs [[PDF](https://arxiv.org/abs/2504.05673)]\n\n[arxiv 2025.05]  ShotAdapter: Text-to-Multi-Shot Video Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2505.07652),[Page](https://shotadapter.github.io/)] \n\n[arxiv 
2025.06]  EchoShot: Multi-Shot Portrait Video Generation [[PDF](https://arxiv.org/pdf/2506.15838),[Page](https://johnneywang.github.io/EchoShot-webpage)] \n\n[arxiv 2025.06]  FilMaster: Bridging Cinematic Principles and Generative AI for Automated Film Generation [[PDF](https://arxiv.org/abs/2506.18899),[Page](https://filmaster-ai.github.io/)] \n\n[arxiv 2025.08] MAViS: A Multi-Agent Framework for Long-Sequence Video Storytelling  [[PDF](https://arxiv.org/abs/2508.08487)]\n\n[arxiv 2026.01]  The Script is All You Need: An Agentic Framework for Long-Horizon Dialogue-to-Cinematic Video Generation [[PDF](https://arxiv.org/abs/2601.17737),[Page](https://xd-mu.github.io/ScriptIsAllYouNeed/)] ![Code](https://img.shields.io/github/stars/Tencent/digitalhuman?style=social&label=Star)\n\n[arxiv 2026.03]  InfinityStory: Unlimited Video Generation with World Consistency and Character-Aware Shot Transitions [[PDF](https://arxiv.org/pdf/2603.03646)]\n\n[arxiv 2026.03]  COMIC: Agentic Sketch Comedy Generation [[PDF](https://arxiv.org/abs/2603.11048),[Page](https://susunghong.github.io/COMIC/)] ![Code](https://img.shields.io/github/stars/SusungHong/COMIC?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## study\n[arxiv 2026.02]   Causality in Video Diffusers is Separable from Denoising[[PDF](https://arxiv.org/pdf/2602.10095)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## reasoning \n\n[arxiv 2026.02] A Very Big Video Reasoning Suite  [[PDF](https://arxiv.org/abs/2602.20159),[Page](https://video-reason.com/)] \n\n[arxiv 2026.03] EndoCoT: Scaling Endogenous Chain-of-Thought Reasoning in Diffusion Models  [[PDF](https://arxiv.org/abs/2603.12252),[Page](https://lennoxdai.github.io/EndoCoT-Webpage/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Stereo Video Generation \n\n[arxiv 2024.09]StereoCrafter: Diffusion-based Generation of Long and High-fidelity Stereoscopic 3D from Monocular Videos  [[PDF](https://arxiv.org/abs/2409.07447),[Page](https://stereocrafter.github.io/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Controllable Video Generation: time and event \n[arxiv 2025.12]  AlcheMinT: Fine-grained Temporal Control for Multi-Reference Consistent Video Generation [[PDF](https://arxiv.org/abs/2512.10943),[Page](https://snap-research.github.io/Video-AlcheMinT/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Controllable Video Generation \n\n*[arxiv 2023.04]Motion-Conditioned Diffusion Model for Controllable Video Synthesis [[PDF](https://arxiv.org/abs/2304.14404), [Page](https://tsaishien-chen.github.io/MCDiff/)]\n\n[arxiv 2023.06]Video Diffusion Models with Local-Global Context Guidance [[PDF](https://arxiv.org/abs/2306.02562)]\n\n[arxiv 2023.06]VideoComposer: Compositional Video Synthesis with Motion Controllability [[PDF](https://arxiv.org/abs/2306.02018)]\n\n[arxiv 2023.07]Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation [[PDF](https://arxiv.org/abs/2307.06940), [Page](https://videocrafter.github.io/Animate-A-Story)]\n\n[arxiv 2023.10]MotionDirector: Motion Customization of Text-to-Video Diffusion Models 
[[PDF](https://arxiv.org/abs/2310.08465),[Page](https://showlab.github.io/MotionDirector/)]\n\n[arxiv 2023.11]Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer[[PDF](https://arxiv.org/abs/2311.17009),[Page](https://diffusion-motion-transfer.github.io/)]\n\n[arxiv 2023.11]SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models[[PDF](https://arxiv.org/abs/2311.16933), [Page](https://guoyww.github.io/projects/SparseCtrl)]\n\n[arxiv 2023.12]Fine-grained Controllable Video Generation via Object Appearance and Context [[PDF](https://arxiv.org/abs/2312.02919),[Page](https://hhsinping.github.io/factor)]\n\n[arxiv 2023.12]Drag-A-Video: Non-rigid Video Editing with Point-based Interaction [[PDF](https://arxiv.org/abs/2312.02936),[Page](https://drag-a-video.github.io/)]\n\n[arxiv 2023.12]Peekaboo: Interactive Video Generation via Masked-Diffusion [[PDF](https://arxiv.org/abs/2312.07509),[Page](https://jinga-lala.github.io/projects/Peekaboo/)]\n\n[arxiv 2023.12]InstructVideo: Instructing Video Diffusion Models with Human Feedback [[PDF](https://arxiv.org/abs/2312.12490),[Page](https://instructvideo.github.io/)]\n\n[arxiv 2024.01]Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation[[PDF](https://arxiv.org/abs/2401.10150)]\n\n[arxiv 2024.01]Synthesizing Moving People with 3D Control [[PDF](https://arxiv.org/abs/2401.10889),[Page](https://boyiliee.github.io/3DHM.github.io/)]\n\n[arxiv 2024.02]Boximator: Generating Rich and Controllable Motions for Video Synthesis [[PDF](https://arxiv.org/abs/2402.01566),[Page](https://boximator.github.io/)]\n\n[arxiv 2024.02]InteractiveVideo: User-Centric Controllable Video Generation with Synergistic Multimodal Instructions [[PDF](https://arxiv.org/abs/2402.03040),[Page](https://github.com/invictus717/InteractiveVideo)]\n\n[arxiv 2024.03]Animate Your Motion: Turning Still Images into Dynamic Videos [[PDF](https://arxiv.org/abs/2403.10179),[Page](https://mingxiao-li.github.io/smcd/)]\n\n[arxiv 2024.04]Motion Inversion for Video Customization [[PDF](https://arxiv.org/abs/2403.20193),[Page](https://wileewang.github.io/MotionInversion/)]\n\n[arxiv 2023.12]Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models [[PDF](https://arxiv.org/abs/2312.01409),[Page](https://primecai.github.io/generative_rendering/)]\n\n[arxiv 2024.05]MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model [[PDF](https://arxiv.org/abs/2405.20222),[Page](https://myniuuu.github.io/MOFA_Video/)]\n\n[arxiv 2024.06] FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models [[PDF](https://arxiv.org/abs/2406.16863),[Page](http://haonanqiu.com/projects/FreeTraj.html)]\n\n[arxiv 2024.06] MVOC: a training-free multiple video object composition method with diffusion models [[PDF](https://arxiv.org/abs/2406.15829),[Page](https://sobeymil.github.io/mvoc.com/)]\n\n[arxiv 2024.06] MotionBooth: Motion-Aware Customized Text-to-Video Generation [[PDF](https://arxiv.org/abs/2406.17758),[Page](https://jianzongwu.github.io/projects/motionbooth)]\n\n[CVPR 2025] Tora: Trajectory-oriented Diffusion Transformer for Video Generation  [[PDF](https://arxiv.org/abs/2407.21705),[Page](https://ali-videoai.github.io/tora_video/)] [![Code](https://img.shields.io/github/stars/alibaba/Tora?style=social&label=Star)](https://github.com/alibaba/Tora)\n\n[arxiv 2024.08] Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level 
Dynamics[[PDF](https://arxiv.org/abs/2408.04631),[Page](https://vgg-puppetmaster.github.io/)]\n\n[arxiv 2024.08] TrackGo: A Flexible and Efficient Method for Controllable Video Generation [[PDF](https://arxiv.org/abs/2408.11475),[Page](https://zhtjtcz.github.io/TrackGo-Page/)]\n\n[arxiv 2024.10] DragEntity: Trajectory Guided Video Generation using Entity and Positional Relationships [[PDF](https://arxiv.org/abs/2410.10751)]\n\n[arxiv 2024.10] MovieCharacter: A Tuning-Free Framework for Controllable Character Video Synthesis [[PDF](https://arxiv.org/abs/2410.20974),[Page](https://moviecharacter.github.io/)]\n\n[arxiv 2024.11] SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation [[PDF](https://arxiv.org/abs/2411.04989),[Page](https://kmcode1.github.io/Projects/SG-I2V/)]\n\n[arxiv 2024.11] Motion Control for Enhanced Complex Action Video Generation [[PDF](https://arxiv.org/abs/2411.08328),[Page](https://mvideo-v1.github.io/)]\n\n[arxiv 2024.11] OnlyFlow: Optical Flow based Motion Conditioning for Video Diffusion Models [[PDF](https://arxiv.org/abs/2411.10501)]\n\n[arxiv 2024.12] Motion Prompting: Controlling Video Generation with Motion Trajectories [[PDF](https://arxiv.org/abs/2412.02700),[Page](https://motion-prompting.github.io/)] \n\n[arxiv 2024.12] Video Motion Transfer with Diffusion Transformers [[PDF](https://arxiv.org/abs/2412.07776),[Page](https://ditflow.github.io/)] ![Code](https://img.shields.io/github/stars/ditflow/ditflow?style=social&label=Star)\n\n[arxiv 2024.12] Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation [[PDF](https://arxiv.org/abs/2412.07761/),[Page](https://vdm-evfi.github.io/)] \n\n[arxiv 2024.12] Trajectory Attention for Fine-grained Video Motion Control [[PDF](https://arxiv.org/abs/2411.19324),[Page]()] ![Code](https://img.shields.io/github/stars/xizaoqu/TrajectoryAttntion?style=social&label=Star)\n\n[arxiv 2024.12] 3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation [[PDF](https://arxiv.org/abs/2412.07759),[Page](http://fuxiao0719.github.io/projects/3dtrajmaster)] ![Code](https://img.shields.io/github/stars/KwaiVGI/3DTrajMaster?style=social&label=Star)\n\n[arxiv 2024.12] Mojito: Motion Trajectory and Intensity Control for Video Generation [[PDF](https://arxiv.org/abs/2412.08948),[Page](https://sites.google.com/view/mojito-video)] ![Code](https://img.shields.io/github/stars/jkooy/mojito?style=social&label=Star)\n\n[arxiv 2024.12] MotionBridge: Dynamic Video Inbetweening with Flexible Controls [[PDF](https://arxiv.org/pdf/2412.13190)]\n\n[arxiv 2024.12] AniDoc: Animation Creation Made Easier [[PDF](https://arxiv.org/pdf/2412.14173),[Page](https://github.com/yihao-meng/AniDoc)] ![Code](https://img.shields.io/github/stars/yihao-meng/AniDoc?style=social&label=Star)\n\n[arxiv 2024.12] LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis [[PDF](https://arxiv.org/abs/2412.15214),[Page](https://ppetrichor.github.io/levitor.github.io/)] ![Code](https://img.shields.io/github/stars/qiuyu96/LeviTor?style=social&label=Star)\n\n[arxiv 2025.01] Diffusion as Shader: 3D-aware Video Diffusion for Versatile Video Generation Control [[PDF](https://arxiv.org/abs/2501.03847),[Page](https://igl-hkust.github.io/das/more_results.html#mesh-to-video)] ![Code](https://img.shields.io/github/stars/IGL-HKUST/DiffusionAsShader?style=social&label=Star)\n\n[arxiv 2025.01] Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise 
[[PDF](https://arxiv.org/abs/2501.08331),[Page](https://vgenai-netflix-eyeline-research.github.io/Go-with-the-Flow/)] ![Code](https://img.shields.io/github/stars/VGenAI-Netflix-Eyeline-Research/Go-with-the-Flow?style=social&label=Star)\n\n[arxiv 2025.01] LayerAnimate:Layer-specific Control for Animation [[PDF](https://arxiv.org/abs/2501.08295),[Page](https://layeranimate.github.io/)] ![Code](https://img.shields.io/github/stars/IamCreateAI/LayerAnimate?style=social&label=Star)\n\n[arxiv 2025.01] Training-Free Motion-Guided Video Generation with Enhanced Temporal Consistency Using Motion Consistency Loss [[PDF](https://arxiv.org/abs/2501.07563v1),[Page](https://zhangxinyu-xyz.github.io/SimulateMotion.github.io/)] \n\n[arxiv 2025.01] Separate Motion from Appearance: Customizing Motion via Customizing Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2501.16714)]\n\n[arxiv 2025.02] VidSketch: Hand-drawn Sketch-Driven Video Generation with Diffusion Control  [[PDF](https://arxiv.org/abs/2502.01101),[Page](https://csfufu.github.io/vid_sketch/)] ![Code](https://img.shields.io/github/stars/CSfufu/VidSketch?style=social&label=Star)\n\n[arxiv 2025.02] MotionCanvas: Cinematic Shot Design with Controllable Image-to-Video Generation  [[PDF](https://arxiv.org/abs/2502.04299),[Page](https://motion-canvas25.github.io/)] \n\n[arxiv 2025.02] MotionMatcher: Motion Customization of Text-to-Video Diffusion Models via Motion Feature Matching  [[PDF](https://arxiv.org/abs/2502.13234),[Page](https://www.csie.ntu.edu.tw/~b09902097/motionmatcher/)] ![Code](https://img.shields.io/github/stars/b09902097/motionmatcher?style=social&label=Star)\n\n[arxiv 2025.02]  C-Drag: Chain-of-Thought Driven Motion Controller for Video Generation [[PDF](https://arxiv.org/pdf/2502.19868),[Page](https://github.com/WesLee88524/C-Drag-Official-Repo)] ![Code](https://img.shields.io/github/stars/WesLee88524/C-Drag-Official-Repo?style=social&label=Star)\n\n[arxiv 2025.03]  MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance [[PDF](https://arxiv.org/abs/2503.16421),[Page](https://quanhaol.github.io/magicmotion-site/)] ![Code](https://img.shields.io/github/stars/quanhaol/MagicMotion?style=social&label=Star)\n\n[arxiv 2025.03] PoseTraj: Pose-Aware Trajectory Control in Video Diffusion  [[PDF](https://arxiv.org/abs/2503.16068),[Page](https://robingg1.github.io/Pose-Traj/)] ![Code](https://img.shields.io/github/stars/robingg1/PoseTraj?style=social&label=Star)\n\n[arxiv 2025.03] Enabling Versatile Controls for Video Diffusion Models  [[PDF](https://pp-vctrl.github.io/),[Page](https://pp-vctrl.github.io/)] ![Code](https://img.shields.io/github/stars/PaddlePaddle/PaddleMIX/tree/develop/ppdiffusers/examples/ppvctrl?style=social&label=Star)\n\n[arxiv 2025.03] Re-HOLD: Video Hand Object Interaction Reenactment via adaptive Layout-instructed Diffusion Model  [[PDF](https://arxiv.org/abs/2503.16942),[Page](https://fyycs.github.io/Re-HOLD/)] \n\n[arxiv 2025.04]  Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization [[PDF](https://arxiv.org/abs/2504.08641),[Page](https://video-msg.github.io/)] ![Code](https://img.shields.io/github/stars/jialuli-luka/Video-MSG?style=social&label=Star)\n\n\n[arxiv 2025.04] OmniVDiff: Omni Controllable Video Diffusion for Generation and Understanding  [[PDF](https://arxiv.org/abs/2504.10825),[Page](https://tele-ai.github.io/OmniVDiff/)] 
![Code](https://img.shields.io/github/stars/Tele-AI/OmniVDiff?style=social&label=Star)\n\n[arxiv 2025.05]  WonderPlay: Dynamic 3D Scene Generation from a Single Image and Actions [[PDF](https://arxiv.org/abs/2505.18151),[Page](https://kyleleey.github.io/WonderPlay/)] ![Code](https://img.shields.io/github/stars/kyleleey/WonderPlay?style=social&label=Star)\n\n[arxiv 2025.06] IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation  [[PDF](https://arxiv.org/abs/2506.03150),[Page](https://yuanze-lin.me/IllumiCraft_page)] \n\n[arxiv 2025.06] FullDiT2: Efficient In-Context Conditioning for Video Diffusion Transformers  [[PDF](https://arxiv.org/abs/2506.04213),[Page](https://fulldit2.github.io/)] \n\n[arxiv 2025.06] Follow-Your-Motion: Video Motion Transfer via Efficient Spatial-Temporal Decoupled Finetuning  [[PDF](https://arxiv.org/abs/2506.05207),[Page](https://follow-your-motion.github.io/)] \n\n[arxiv 2025.07]  SynMotion: Semantic-Visual Adaptation for Motion Customized Video Generation [[PDF](http://arxiv.org/abs/2506.23690),[Page](https://lucaria-academy.github.io/SynMotion/)] \n\n[arxiv 2025.07] LongAnimation: Long Animation Generation with Dynamic Global-Local Memory  [[PDF](https://arxiv.org/abs/2507.01945),[Page](https://cn-makers.github.io/long_animation_web/)] ![Code](https://img.shields.io/github/stars/CN-makers/LongAnimation?style=social&label=Star)\n\n[arxiv 2025.07]  AnyI2V: Animating Any Conditional Image with Motion Control [[PDF](https://arxiv.org/abs/2507.02857),[Page](https://henghuiding.com/AnyI2V/)] ![Code](https://img.shields.io/github/stars/FudanCVL/AnyI2V?style=social&label=Star)\n\n[arxiv 2025.07] MotionShot: Adaptive Motion Transfer across Arbitrary Objects for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2507.16310)]\n\n[arxiv 2025.08] PersonaAnimator: Personalized Motion Transfer from Unconstrained Videos  [[PDF](https://arxiv.org/abs/2508.19895)]\n\n[arxiv 2025.09] DiTraj: training-free trajectory control for video diffusion transformer  [[PDF](https://arxiv.org/pdf/2509.21839)]\n\n[arxiv 2025.10] MotionRAG: Motion Retrieval-Augmented Image-to-Video Generation  [[PDF](https://arxiv.org/abs/2509.26391)]\n\n[arxiv 2025.10]  Mask2IV: Interaction-Centric Video Generation via Mask Trajectories [[PDF](https://arxiv.org/abs/2510.03135),[Page](https://reagan1311.github.io/mask2iv/)] \n\n[arxiv 2025.10]  FlexTraj: Image-to-Video Generation with Flexible Point Trajectory Control [[PDF](https://arxiv.org/abs/2510.08527),[Page](https://bestzzhang.github.io/FlexTraj/)] \n\n[arxiv 2025.10]  MultiCOIN: Multi-Modal COntrollable Video INbetweening [[PDF](https://arxiv.org/abs/2510.08561),[Page](https://multicoinx.github.io/multicoin/)] \n\n[arxiv 2025.10] Controllable Video Synthesis via Variational Inference  [[PDF](https://arxiv.org/abs/2510.07670),[Page](https://video-synthesis-variational.github.io/)] \n\n[arxiv 2025.10] TGT: Text-Grounded Trajectories for Locally Controlled Video Generation  [[PDF](https://arxiv.org/abs/2510.15104),[Page](https://textgroundedtraj.github.io/)] \n\n[arxiv 2025.10] Video-As-Prompt: Unified Semantic Control for Video Generation  [[PDF](https://arxiv.org/pdf/2510.20888),[Page](https://github.com/bytedance/Video-As-Prompt)] ![Code](https://img.shields.io/github/stars/bytedance/Video-As-Prompt?style=social&label=Star)\n\n[arxiv 2025.10] SAGE: Structure-Aware Generative Video Transitions between Diverse Clips  
[[PDF](https://arxiv.org/abs/2510.24667),[Page](https://kan32501.github.io/sage.github.io/)]\n\n[arxiv 2025.10]  VFXMaster: Unlocking Dynamic Visual Effect Generation via In-Context Learning [[PDF](https://arxiv.org/abs/2510.25772),[Page](https://libaolu312.github.io/VFXMaster/)] ![Code](https://img.shields.io/github/stars/libaolu312/VFXMaster?style=social&label=Star)\n\n[arxiv 2025.11]  Time-to-Move: Training-Free Motion Controlled Video Generation via Dual-Clock Denoising [[PDF](https://arxiv.org/abs/2511.08633),[Page](https://time-to-move.github.io/)] ![Code](https://img.shields.io/github/stars/time-to-move/TTM?style=social&label=Star)\n\n[arxiv 2025.11]  In-Video Instructions: Visual Signals as Generative Control [[PDF](https://arxiv.org/abs/2511.19401),[Page](https://fangggf.github.io/In-Video/)] ![Code](https://img.shields.io/github/stars/VainF/In-Video-Instructions?style=social&label=Star)\n\n[arxiv 2025.12] DisMo: Disentangled Motion Representations for Open-World Motion Transfer  [[PDF](https://arxiv.org/abs/2511.23428),[Page](https://compvis.github.io/DisMo/)] ![Code](https://img.shields.io/github/stars/CompVis/DisMo?style=social&label=Star)\n\n[arxiv 2025.12]  VHOI: Controllable Video Generation of Human–Object Interactions from Sparse Trajectories via Motion Densification [[PDF](https://arxiv.org/abs/2512.09646),[Page](https://vcai.mpi-inf.mpg.de/projects/vhoi/)] \n\n[arxiv 2025.12] The World is Your Canvas: Painting Promptable Events with Reference Images, Trajectories, and Text  [[PDF](https://arxiv.org/abs/2512.16924),[Page](https://worldcanvas.github.io/)] ![Code](https://img.shields.io/github/stars/pPetrichor/WorldCanvas?style=social&label=Star)\n\n[arxiv 2026.01]  MotionAdapter: Video Motion Transfer via Content-Aware Attention Customization [[PDF](https://arxiv.org/pdf/2601.01955),[Page](https://zhexin-zhang.github.io/MotionAdapter/)] ![Code](https://img.shields.io/github/stars/Zhexin-Zhang/MotionAdapter-Code?style=social&label=Star)\n\n[arxiv 2026.01]  Moaw: Unleashing Motion Awareness for Video Diffusion Models [[PDF](https://arxiv.org/abs/2601.12761),[Page](https://github.com/tianqi-zh/Moaw)] ![Code](https://img.shields.io/github/stars/tianqi-zh/Moaw?style=social&label=Star)\n\n[arxiv 2026.01]  Olaf-World: Orienting Latent Actions for Video World Modeling [[PDF](https://arxiv.org/abs/2602.10104),[Page](https://showlab.github.io/Olaf-World/)] ![Code](https://img.shields.io/github/stars/showlab/Olaf-World?style=social&label=Star)\n\n[arxiv 2026.02] FlexAM: Flexible Appearance-Motion Decomposition for Versatile Video Generation Control  [[PDF](https://arxiv.org/abs/2602.13185),[Page](https://github.com/IGL-HKUST/FlexAM)] ![Code](https://img.shields.io/github/stars/IGL-HKUST/FlexAM?style=social&label=Star)\n\n[arxiv 2026.03] Let Your Image Move with Your Motion! 
– Implicit Multi-Object Multi-Motion Transfer  [[PDF](https://arxiv.org/abs/2603.01000),[Page](https://github.com/Ethan-Li123/FlexiMMT)] ![Code](https://img.shields.io/github/stars/Ethan-Li123/FlexiMMT?style=social&label=Star)\n\n[arxiv 2026.03] FlowMotion: Training-Free Flow Guidance for Video Motion Transfer  [[PDF](https://arxiv.org/pdf/2603.06289)]\n\n[arxiv 2026.03]  Video2LoRA: Unified Semantic-Controlled Video Generation via Per-Reference-Video LoRA [[PDF](https://arxiv.org/pdf/2603.08210),[Page](https://github.com/BerserkerVV/Video2LoRA/)] ![Code](https://img.shields.io/github/stars/BerserkerVV/Video2LoRA/?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## motion transfer | pose\n[arxiv 2023.05]LEO: Generative Latent Image Animator for Human Video Synthesis [[PDF](https://arxiv.org/abs/2305.03989),[Page](https://wyhsirius.github.io/LEO-project/)]\n\n*[arxiv 2023.03]Conditional Image-to-Video Generation with Latent Flow Diffusion Models [[PDF](https://arxiv.org/abs/2303.13744)]\n\n[arxiv 2023.07]DisCo: Disentangled Control for Referring Human Dance Generation in Real World\n[[PDF](https://arxiv.org/abs/2307.00040), [Page](https://disco-dance.github.io/)]\n\n[arxiv 2023.11]MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model [[PDF](https://arxiv.org/abs/2311.16498), [Page](https://showlab.github.io/magicanimate)]\n\n[arxiv 2023.12]DreaMoving: A Human Dance Video Generation Framework based on Diffusion Models [[PDF](https://arxiv.org/abs/2312.05107), [Page](https://dreamoving.github.io/dreamoving)]\n\n[arxiv 2023.12]MotionEditor: Editing Video Motion via Content-Aware Diffusion [[PDF](https://arxiv.org/abs/2311.18830),[Page](https://francis-rings.github.io/MotionEditor/)]\n\n[arxiv 2023.12]Customizing Motion in Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2312.04966),[Page](https://joaanna.github.io/customizing_motion/)]\n\n[arxiv 2023.12]MotionCrafter: One-Shot Motion Customization of Diffusion Models [[PDF](https://arxiv.org/abs/2312.05288)]\n\n[arxiv 2023.11]MagicDance: Realistic Human Dance Video Generation with Motions & Facial Expressions Transfer [[PDF](https://arxiv.org/abs/2311.12052), [Page](https://boese0601.github.io/magicdance/)]\n\n[arxiv 2023.12] Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation[[PDF](https://arxiv.org/abs/2311.17117),[Page](https://humanaigc.github.io/animate-anyone/)]\n\n[arxiv 2024.01]Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based Video Generation[[PDF](https://arxiv.org/abs/2401.10150)]\n\n[arxiv 2024.03]Spectral Motion Alignment for Video Motion Transfer using Diffusion Models[[PDF](https://arxiv.org/abs/2403.15249),[Page](https://geonyeong-park.github.io/spectral-motion-alignment/)]\n\n[arxiv 2024.03]Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance [[PDF](https://arxiv.org/pdf/2403.14781.pdf),[Page](https://fudan-generative-vision.github.io/champ/#/)]\n\n[arxiv 2024.03]Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework [[PDF](https://arxiv.org/abs/2403.16510), [Page](https://github.com/ICTMCG/Make-Your-Anchor)]\n\n[arxiv 2024.05]ReVideo: Remake a Video with Motion and Content Control 
[[PDF](https://arxiv.org/abs/2405.13865),[Page](https://mc-e.github.io/project/ReVideo/)]\n\n[arxiv 2024.05]VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation  [[PDF](https://arxiv.org/abs/2405.18156)]\n\n[arxiv 2024.05]Disentangling Foreground and Background Motion for Enhanced Realism in Human Video Generation [[PDF](https://arxiv.org/abs/2405.16393),[Page](https://liujl09.github.io/humanvideo_movingbackground/)]\n\n[arxiv 2024.05] MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation. [[PDF](),[Page](https://github.com/TMElyralab/MusePose?tab=readme-ov-file)]\n\n[arxiv 2024.05]MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion [[PDF](https://arxiv.org/abs/2405.20325),[Page](https://francis-rings.github.io/MotionFollower/)]\n\n[arxiv 2024.06] UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation[[PDF](https://arxiv.org/abs/2406.01188),[Page](https://unianimate.github.io/)]\n\n[arxiv 2024.06] Searching Priors Makes Text-to-Video Synthesis Better[[PDF](https://arxiv.org/abs/2406.03215),[Page](https://hrcheng98.github.io/Search_T2V/)]\n\n[arxiv 2024.06]MotionClone: Training-Free Motion Cloning for Controllable Video Generation[[PDF](https://arxiv.org/pdf/2406.05338)]\n\n[arxiv 2024.07]IDOL: Unified Dual-Modal Latent Diffusion for Human-Centric Joint Video-Depth Generation  [[PDF](https://arxiv.org/abs/2407.10937),[Page](https://yhzhai.github.io/idol/)]\n\n[arxiv 2024.07]HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation  [[PDF](https://arxiv.org/abs/2407.17438),[Page](https://github.com/zhenzhiwang/HumanVid/)]\n\n[arxiv 2024.10] Replace Anyone in Videos [[PDF](https://arxiv.org/abs/2409.19911)]\n\n[arxiv 2024.10] MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling [[PDF](http://arxiv.org/abs/2409.16160),[Page](https://menyifang.github.io/projects/MIMO/index.html)]\n\n[arxiv 2024.11] MikuDance: Animating Character Art with Mixed Motion Dynamics [[PDF](https://arxiv.org/abs/2411.08656),[Page](https://kebii.github.io/MikuDance/)]\n\n[arxiv 2024.11] StableAnimator: High-Quality Identity-Preserving Human Image Animation [[PDF](https://arxiv.org/abs/2411.17697),[Page](https://francis-rings.github.io/StableAnimator/)] ![Code](https://img.shields.io/github/stars/Francis-Rings/StableAnimator?style=social&label=Star)\n\n[arxiv 2024.11] AnimateAnything: Consistent and Controllable Animation for Video Generation [[PDF](https://arxiv.org/abs/2411.10836),[Page](https://yu-shaonian.github.io/Animate_Anything/)]\n\n\n[arxiv 2024.12]  Fleximo: Towards Flexible Text-to-Human Motion Video Generation [[PDF](https://arxiv.org/abs/2411.19459)] \n\n[arxiv 2024.12]  MotionFlow: Attention-Driven Motion Transfer in Video Diffusion Models [[PDF](https://arxiv.org/pdf/2412.05275.pdf),[Page](https://motionflow-diffusion.github.io/)] \n\n[arxiv 2024.12]  MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance [[PDF](https://motionshop-diffusion.github.io/#),[Page](https://motionshop-diffusion.github.io/#)] ![Code](https://img.shields.io/github/stars/gemlab-vt/motionshop?style=social&label=Star)\n\n[arxiv 2024.12] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation  [[PDF](https://arxiv.org/abs/2412.09349)]\n\n[arxiv 2024.12] Consistent Human Image and Video Generation with Spatially Conditioned Diffusion  [[PDF](https://arxiv.org/abs/2412.14531),[Page](https://github.com/ljzycmd/SCD)] 
![Code](https://img.shields.io/github/stars/ljzycmd/SCD?style=social&label=Star)\n\n[arxiv 2024.12] VAST 1.0: A Unified Framework for Controllable and Consistent Video Generation  [[PDF](https://arxiv.org/pdf/2412.16677)]\n\n[arxiv 2025.01]  RAIN: Real-time Animation Of Infinite Video Stream [[PDF](https://arxiv.org/abs/2412.19489),[Page](https://pscgylotti.github.io/pages/RAIN/)] ![Code](https://img.shields.io/github/stars/Pscgylotti/RAIN?style=social&label=Star)\n\n[arxiv 2025.01]  X-Dyna: Expressive Dynamic Human Image Animation [[PDF](https://arxiv.org/abs/2501.10021),[Page](https://x-dyna.github.io/xdyna.github.io/)] ![Code](https://img.shields.io/github/stars/bytedance/X-Dyna?style=social&label=Star)\n\n[arxiv 2025.02] HumanDiT: Pose-Guided Diffusion Transformer for Long-form Human Motion Video Generation  [[PDF](https://arxiv.org/abs/2502.04847),[Page](https://agnjason.github.io/HumanDiT-page/)] \n\n[arxiv 2025.02] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance  [[PDF](),[Page](https://anycharv.github.io/)] ![Code](https://img.shields.io/github/stars/AnyCharV/AnyCharV?style=social&label=Star)\n\n[arxiv 2025.03] Cosmos-Transfer1: Conditional World Generation with Adaptive Multimodal Control  [[PDF](https://arxiv.org/abs/2503.14492),[Page](https://github.com/nvidia-cosmos/cosmos-transfer1)] ![Code](https://img.shields.io/github/stars/nvidia-cosmos/cosmos-transfer1?style=social&label=Star)\n\n[arxiv 2025.03] Decouple and Track: Benchmarking and Improving Video Diffusion Transformers For Motion Transfer  [[PDF](https://arxiv.org/pdf/2503.17350),[Page](https://shi-qingyu.github.io/DeT.github.io/)] \n\n[arxiv 2025.04] HumanDreamer: Generating Controllable Human-Motion Videos via Decoupled Generation  [[PDF](https://arxiv.org/abs/2503.24026),[Page](https://humandreamer.github.io/)] ![Code](https://img.shields.io/github/stars/GigaAI-research/HumanDreamer?style=social&label=Star)\n\n[arxiv 2025.04] Multi-identity Human Image Animation with Structural Video Diffusion  [[PDF](https://arxiv.org/abs/2504.04126)]\n\n[arxiv 2025.04]  TokenMotion: Decoupled Motion Control via Token Disentanglement for Human-centric Video Generation [[PDF](https://arxiv.org/pdf/2504.08181)]\n\n[arxiv 2025.04] UniAnimate-DiT: Human Image Animation with Large-Scale Video Diffusion Transformer  [[PDF](https://arxiv.org/abs/2504.11289),[Page](https://github.com/ali-vilab/UniAnimate-DiT)] ![Code](https://img.shields.io/github/stars/ali-vilab/UniAnimate-DiT?style=social&label=Star)\n\n[arxiv 2025.04] Taming Consistency Distillation for Accelerated Human Image Animation  [[PDF](https://arxiv.org/pdf/2504.11143)]\n\n[arxiv 2025.04]  RealisDance-DiT: Simple yet Strong Baseline towards Controllable Character Animation in the Wild [[PDF](https://arxiv.org/abs/2504.14977),[Page](https://thefoxofsky.github.io/project_pages/RealisDance-DiT/index)] ![Code](https://img.shields.io/github/stars/damo-cv/RealisDance?style=social&label=Star)\n\n[arxiv 2025.04]  Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation [[PDF](https://arxiv.org/abs/2504.14899),[Page](https://ewrfcas.github.io/Uni3C/)] ![Code](https://img.shields.io/github/stars/ewrfcas/Uni3C?style=social&label=Star)\n\n[arxiv 2025.05] AnimateAnywhere: Rouse the Background in Human Image Animation  [[PDF](https://arxiv.org/pdf/2404.06451v1.pdf),[Page](https://animateanywhere.github.io/)] ![Code](https://img.shields.io/github/stars/liuxiaoyu1104/AnimateAnywhere?style=social&label=Star)\n\n[arxiv 2025.05] 
DanceTogether! Identity-Preserving Multi-Person Interactive Video Generation  [[PDF](https://arxiv.org/abs/2505.18078),[Page](https://dancetog.github.io/)] ![Code](https://img.shields.io/github/stars/yisuanwang/DanceTog?style=social&label=Star)\n\n[arxiv 2025.06] DreamDance: Animating Character Art via Inpainting Stable Gaussian Worlds  [[PDF](https://arxiv.org/pdf/2505.24733),[Page](https://kebii.github.io/DreamDance)] ![Code](https://img.shields.io/github/stars/Kebii/DreamDance?style=social&label=Star)\n\n[arxiv 2025.08]  PoseGen: In-Context LoRA Finetuning for Pose-Controllable Long Human Video Generation [[PDF](https://arxiv.org/pdf/2508.05091)]\n\n[arxiv 2025.08] Animate-X++: Universal Character Image Animation with Dynamic Backgrounds  [[PDF](https://arxiv.org/abs/2508.09454),[Page](https://lucaria-academy.github.io/Animate-X++/)] ![Code](https://img.shields.io/github/stars/Lucaria-Academy/Animate-X++?style=social&label=Star)\n\n[arxiv 2025.09] UniTransfer: Video Concept Transfer via Progressive Spatial and Timestep Decomposition  [[PDF](https://yu-shaonian.github.io/UniTransfer-Web/),[Page](https://yu-shaonian.github.io/UniTransfer-Web/)] \n\n[arxiv 2025.11] SteadyDancer: Harmonized and Coherent Human Image Animation with First-Frame Preservation  [[PDF](https://arxiv.org/abs/2511.19320),[Page](https://mcg-nju.github.io/steadydancer-web)] ![Code](https://img.shields.io/github/stars/MCG-NJU/SteadyDancer?style=social&label=Star)\n\n[arxiv 2025.12]  One-to-All Animation: Alignment-Free Character Animation and Image Pose Transfer [[PDF](https://arxiv.org/pdf/2511.22940),[Page](https://ssj9596.github.io/one-to-all-animation-project/)] ![Code](https://img.shields.io/github/stars/ssj9596/One-to-All-Animation?style=social&label=Star)\n\n[arxiv 2025.12] PoseAnything: Universal Pose-guided Video Generation with Part-aware Temporal Coherence  [[PDF](http://arxiv.org/abs/2512.13465),[Page](https://ryan-w2024.github.io/project/PoseAnything/)] ![Code](https://img.shields.io/github/stars/Ryan-w2024/PoseAnything?style=social&label=Star)\n\n[arxiv 2025.12]  EverybodyDance: Bipartite Graph–Based Identity Correspondence for Multi-Character Animation [[PDF](https://arxiv.org/pdf/2512.16360)]\n\n[arxiv 2025.12] High-Fidelity and Long-Duration Human Image Animation with Diffusion Transformer  [[PDF](https://arxiv.org/abs/2512.21905)]\n\n[arxiv 2026.01] MoCha: End-to-End Video Character Replacement without Structural Guidance  [[PDF](https://arxiv.org/pdf/2601.08587),[Page](https://github.com/Orange-3DV-Team/MoCha)] ![Code](https://img.shields.io/github/stars/Orange-3DV-Team/MoCha?style=social&label=Star)\n\n[arxiv 2026.01] CoDance: An Unbind-Rebind Paradigm for Robust Multi-Subject Animation  [[PDF](https://arxiv.org/abs/2601.11096),[Page](https://lucaria-academy.github.io/CoDance/)] \n\n[arxiv 2026.02]  MotionWeaver: Holistic 4D-Anchored Framework for Multi-Humanoid Image Animation [[PDF](https://arxiv.org/pdf/2602.13326)]\n\n[arxiv 2026.03] Kling-MotionControl Technical Report  [[PDF](https://arxiv.org/pdf/2603.03160)]\n\n[arxiv 2026.03] AnyCrowd: Instance-Isolated Identity-Pose Binding for Arbitrary Multi-Character Animation  [[PDF](https://arxiv.org/abs/2603.15415),[Page](https://xiezhy6.github.io/anycrowd/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
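\n\nMost pose-driven entries in this section (MagicAnimate, Animate Anyone, Champ, UniAnimate, StableAnimator, ...) share one conditioning recipe: a reference image supplies appearance through a frozen "reference net" whose features are attended to, while per-frame pose renderings supply motion through a lightweight encoder added to the latents. The sketch below shows only that plumbing; `PoseEncoder`, the `extra_kv` attention hook, and all shapes are invented for illustration and do not reproduce any specific paper's architecture.\n\n```python\nimport torch\nimport torch.nn as nn\n\nclass PoseEncoder(nn.Module):\n    # Toy pose-map encoder: per-frame skeleton renderings (already resized to\n    # the latent grid) become residual features added to the video latents.\n    def __init__(self, in_ch=3, latent_ch=4):\n        super().__init__()\n        self.net = nn.Sequential(\n            nn.Conv2d(in_ch, 32, 3, padding=1), nn.SiLU(),\n            nn.Conv2d(32, latent_ch, 3, padding=1),\n        )\n\n    def forward(self, pose):  # (B*F, 3, h, w) -> (B*F, 4, h, w)\n        return self.net(pose)\n\ndef denoise_pose_guided(unet, pose_encoder, ref_feats, pose_seq, z_t, t):\n    # One denoising step of a pose-driven animation model (sketch).\n    #   ref_feats: appearance features from a frozen reference-net pass over\n    #              the identity image, injected through spatial attention.\n    #   pose_seq:  (B, F, 3, h, w) pose renderings, one per target frame.\n    #   z_t:       (B, F, 4, h, w) noisy video latents at timestep t.\n    residual = pose_encoder(pose_seq.flatten(0, 1)).view_as(z_t)\n    z_in = z_t + residual  # motion enters as an additive latent hint\n    # Appearance enters by letting each spatial self-attention layer also\n    # attend over the reference features (`extra_kv` is a hypothetical hook).\n    return unet(z_in, t, extra_kv=ref_feats)\n```\n\nThe multi-person papers above (DanceTogether!, EverybodyDance, AnyCrowd) then focus on keeping each identity bound to its own pose track, which this naive additive scheme does not guarantee.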
\n\n\n\n\n## autoregressive for video \n\n[arxiv 2024.12]  Autoregressive Video Generation without Vector Quantization [[PDF](https://arxiv.org/abs/2412.14169),[Page](https://github.com/baaivision/NOVA)] ![Code](https://img.shields.io/github/stars/baaivision/NOVA?style=social&label=Star)\n\n[arxiv 2025.03] AR-Diffusion: Asynchronous Video Generation with Auto-Regressive Diffusion  [[PDF](https://arxiv.org/abs/2503.07418),[Page](https://github.com/iva-mzsun/AR-Diffusion)] ![Code](https://img.shields.io/github/stars/iva-mzsun/AR-Diffusion?style=social&label=Star)\n\n[arxiv 2025.03]  Fast Autoregressive Video Generation with Diagonal Decoding[[PDF](https://arxiv.org/abs/2503.14070),[Page](https://www.microsoft.com/en-us/research/project/ar-videos/diagonal-decoding/)] \n\n[arxiv 2025.07] Lumos-1: On Autoregressive Video Generation from a Unified Model Perspective  [[PDF](https://arxiv.org/abs/2507.08801),[Page](https://github.com/alibaba-damo-academy/Lumos)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/Lumos?style=social&label=Star)\n\n[arxiv 2025.10] Real-Time Motion-Controllable Autoregressive Video Diffusion  [[PDF](https://arxiv.org/abs/2510.08131),[Page](https://kesenzhao.github.io/AR-Drag.github.io/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)
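\n\nThe entries in this section replace one joint denoising pass over all frames with an autoregressive rollout: each new chunk is denoised conditioned on previously generated frames, usually through cached key/value context. A minimal rollout loop under assumed interfaces (`model.sample` is a placeholder for a causal video diffusion sampler) looks like this:\n\n```python\nimport torch\n\ndef denoise_chunk(model, context, chunk_len, prompt_emb):\n    # Placeholder: sample `chunk_len` new latent frames from a causal video\n    # diffusion model, conditioned on `context` frames (e.g., via cached\n    # key/value states in a causal transformer).\n    return model.sample(context=context, n=chunk_len, cond=prompt_emb)\n\ndef autoregressive_rollout(model, prompt_emb, total_frames,\n                           chunk_len=8, context_len=16):\n    # Generate a long video chunk by chunk. Only the most recent\n    # `context_len` frames are kept as conditioning, which bounds memory but\n    # is also the source of the drift that the long-video papers above fight\n    # with error recycling, anchor frames, and memory banks.\n    frames = []\n    context = None\n    while len(frames) < total_frames:\n        new = denoise_chunk(model, context, chunk_len, prompt_emb)\n        frames.extend(new.unbind(0))  # new: (chunk_len, C, h, w)\n        context = torch.stack(frames[-context_len:])  # sliding window\n    return torch.stack(frames[:total_frames])\n```\n\nDiagonal and asynchronous schedules (AR-Diffusion, Diagonal Decoding above) relax the strict chunk boundary by letting later frames start denoising before earlier ones finish.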
\n\n\n## text \n[arxiv 2024.06]  Text-Animator: Controllable Visual Text Video Generation[[PDF](https://arxiv.org/abs/2406.17777),[Page](https://laulampaul.github.io/text-animator.html)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Camera \n[arxiv 2023.12]MotionCtrl: A Unified and Flexible Motion Controller for Video Generation [[PDF](https://arxiv.org/abs/2312.03641),[Page](https://wzhouxiff.github.io/projects/MotionCtrl/)]\n\n[arxiv 2024.02]Direct-a-Video: Customized Video Generation with User-Directed Camera Movement and Object Motion [[PDF](https://arxiv.org/pdf/2402.03162.pdf),[Page](https://direct-a-video.github.io/)]\n\n[arxiv 2024.04]CameraCtrl: Enabling Camera Control for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2404.02101),[Page](https://hehao13.github.io/projects-CameraCtrl/)]\n\n[arxiv 2024.04]Customizing Text-to-Image Diffusion with Camera Viewpoint Control [[PDF](https://arxiv.org/abs/2404.12333),[Page](https://customdiffusion360.github.io/)]\n\n[arxiv 2024.04]MotionMaster: Training-free Camera Motion Transfer For Video Generation[[PDF](https://arxiv.org/abs/2404.15789)]\n\n[arxiv 2024.05] Video Diffusion Models are Training-free Motion Interpreter and Controller[[PDF](https://arxiv.org/abs/2405.14864),[Page](https://xizaoqu.github.io/moft/)]\n\n[arxiv 2024.05] VividDream: Generating 3D Scene with Ambient Dynamics [[PDF](https://arxiv.org/abs/2405.20334),[Page](https://vivid-dream-4d.github.io/)]\n\n[arxiv 2024.06] CamCo: Camera-Controllable 3D-Consistent Image-to-Video Generation [[PDF](https://arxiv.org/abs/2406.02509),[Page](https://ir1d.github.io/CamCo/)]\n\n[arxiv 2024.06]Training-free Camera Control for Video Generation[[PDF](https://arxiv.org/abs/2406.10126),[Page](https://lifedecoder.github.io/CamTrol/)]\n\n[arxiv 2024.06] Image Conductor: Precision Control for Interactive Video Synthesis [[PDF](https://liyaowei-stu.github.io/project/ImageConductor/),[Page](https://liyaowei-stu.github.io/project/ImageConductor/)]\n\n[arxiv 2024.07]VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control [[PDF](https://arxiv.org/abs/2407.12781),[Page](https://snap-research.github.io/vd3d/)]\n\n[arxiv 2024.08] DreamCinema: Cinematic Transfer with Free Camera and 3D Character [[PDF](https://arxiv.org/abs/2408.12601),[Page](https://liuff19.github.io/DreamCinema)]\n\n[arxiv 2024.09] CinePreGen: Camera Controllable Video Previsualization via Engine-powered Diffusion[[PDF](https://arxiv.org/abs/2408.17424)]\n\n[arxiv 2024.10] Boosting Camera Motion Control for Video Diffusion Transformers [[PDF](https://arxiv.org/abs/2410.10802),[Page](https://soon-yau.github.io/CameraMotionGuidance/)]\n\n[arxiv 2024.10] Cavia: Camera-controllable Multi-view Video Diffusion with View-Integrated Attention [[PDF](https://arxiv.org/abs/2410.10774),[Page](https://ir1d.github.io/Cavia/)]\n\n[arxiv 2024.10] CamI2V: Camera-Controlled Image-to-Video Diffusion Model [[PDF](https://arxiv.org/abs/2410.15957),[Page](https://zgctroy.github.io/CamI2V/)]\n\n[arxiv 2024.11] ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning [[PDF](https://arxiv.org/abs/2411.05003),[Page](https://generative-video-camera-controls.github.io/)]\n\n[arxiv 2024.11] I2VControl-Camera: Precise Video Camera Control with Adjustable Motion Strength [[PDF](https://arxiv.org/abs/2411.06525)]\n\n[arxiv 2024.11] AnimateAnything: Consistent and Controllable Animation for Video Generation [[PDF](https://arxiv.org/abs/2411.10836),[Page](https://yu-shaonian.github.io/Animate_Anything/)] ![Code](https://img.shields.io/github/stars/yu-shaonian/AnimateAnything?style=social&label=Star)\n\n\n[arxiv 2024.12] I2VControl: Disentangled and Unified Video Motion Synthesis Control [[PDF](https://arxiv.org/abs/2411.17765),[Page](https://wanquanf.github.io/I2VControl)] \n\n[arxiv 2024.12] Generative Photography: Scene-Consistent Camera Control for Realistic Text-to-Image Synthesis [[PDF](https://arxiv.org/abs/2412.02168),[Page](https://generative-photography.github.io/project/)] \n\n[arxiv 2024.12] CPA: Camera-pose-awareness Diffusion Transformer for Video Generation [[PDF](https://arxiv.org/abs/2412.01429)]\n\n[arxiv 2024.12] Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training [[PDF](https://arxiv.org/abs/2412.06029),[Page](https://latent-reframe.github.io/)]\n\n[arxiv 2024.12] SynCamMaster: Synchronizing Multi-Camera Video Generation from Diverse Viewpoints [[PDF](https://arxiv.org/abs/2412.07760),[Page](https://jianhongbai.github.io/SynCamMaster/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/SynCamMaster?style=social&label=Star)\n\n\n[arxiv 2024.12] ObjCtrl-2.5D: Training-free Object Control with Camera Poses [[PDF](https://arxiv.org/pdf/2412.07721),[Page](https://wzhouxiff.github.io/projects/ObjCtrl-2.5D/)] ![Code](https://img.shields.io/github/stars/wzhouxiff/ObjCtrl-2.5D?style=social&label=Star)\n\n[arxiv 2024.12] Learning Camera Movement Control from Real-World Drone Videos [[PDF](https://arxiv.org/abs/2412.09620),[Page](https://dvgformer.github.io/)] ![Code](https://img.shields.io/github/stars/hou-yz/dvgformer?style=social&label=Star)\n\n[arxiv 2024.12] Switch-a-View: Few-Shot View Selection Learned from Edited Videos [[PDF](https://arxiv.org/abs/2412.18386),[Page](https://vision.cs.utexas.edu/projects/switch_a_view/)] \n\n[arxiv 2025.01] Free-Form Motion Control: A Synthetic Video Generation Dataset with Controllable Camera and Object Motions [[PDF](https://arxiv.org/abs/2501.01425),[Page](https://henghuiding.github.io/SynFMC/)] \n\n[arxiv 2025.01] JOG3R: On Unifying Video Generation and Camera Pose Estimation [[PDF](https://arxiv.org/abs/2501.01409),[Page](https://paulchhuang.github.io/jog3rwebsite/)] \n\n
[arxiv 2025.01] Go-with-the-Flow: Motion-Controllable Video Diffusion Models Using Real-Time Warped Noise [[PDF](https://arxiv.org/abs/2501.08331),[Page](https://vgenai-netflix-eyeline-research.github.io/Go-with-the-Flow/)] ![Code](https://img.shields.io/github/stars/VGenAI-Netflix-Eyeline-Research/Go-with-the-Flow?style=social&label=Star)\n\n[arxiv 2025.02] CineMaster: A 3D-Aware and Controllable Framework for Cinematic Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2502.08639),[Page](https://cinemaster-dev.github.io/)] \n\n[arxiv 2025.02] FloVD: Optical Flow Meets Video Diffusion Model for Enhanced Camera-Controlled Video Synthesis  [[PDF](https://arxiv.org/pdf/2502.08244)]\n\n[arxiv 2025.02] VidCRAFT3: Camera, Object, and Lighting Control for Image-to-Video Generation  [[PDF](https://arxiv.org/pdf/2502.07531)]\n\n[arxiv 2025.02]  RealCam-I2V: Real-World Image-to-Video Generation with Interactive Complex Camera Control [[PDF](https://arxiv.org/abs/2502.10059),[Page](https://zgctroy.github.io/RealCam-I2V/)] ![Code](https://img.shields.io/github/stars/ZGCTroy/RealCam-I2V?style=social&label=Star)\n\n[arxiv 2025.03] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control  [[PDF](https://arxiv.org/pdf/2503.03751),[Page](https://research.nvidia.com/labs/toronto-ai/GEN3C/)] ![Code](https://img.shields.io/github/stars/nv-tlabs/GEN3C?style=social&label=Star)\n\n[arxiv 2025.03] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models  [[PDF](https://arxiv.org/abs/2503.05638),[Page](https://trajectorycrafter.github.io/)] ![Code](https://img.shields.io/github/stars/TrajectoryCrafter/TrajectoryCrafter?style=social&label=Star)\n\n[arxiv 2025.03] Reangle-A-Video: 4D Video Generation as Video-to-Video Translation  [[PDF](https://arxiv.org/pdf/2503.09151)]\n\n[arxiv 2025.03]  CameraCtrl II: Dynamic Scene Exploration via Camera-controlled Video Diffusion Models  [[PDF](https://arxiv.org/abs/2503.10592),[Page](https://hehao13.github.io/Projects-CameraCtrl-II/)] \n\n[arxiv 2025.03] I2V3D: Controllable image-to-video generation with 3D guidance  [[PDF](https://arxiv.org/pdf/2503.09733),[Page](https://bestzzhang.github.io/I2V3D/)] \n\n[arxiv 2025.03]  ReCamMaster: Camera-Controlled Generative Rendering from A Single Video [[PDF](https://arxiv.org/abs/2503.11647),[Page](https://jianhongbai.github.io/ReCamMaster/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/ReCamMaster?style=social&label=Star)\n\n[arxiv 2025.03] FullDiT: Multi-Task Video Generative Foundation Model with Full Attention  [[PDF](https://arxiv.org/pdf/2503.19907v1),[Page](https://fulldit.github.io/)] \n\n[arxiv 2025.04] OmniCam: Unified Multimodal Video Generation via Camera Control  [[PDF](https://arxiv.org/abs/2504.02312)]\n\n[arxiv 2025.04]  CamContextI2V: Context-aware Controllable Video Generation [[PDF](https://arxiv.org/abs/2504.06022),[Page](https://github.com/LDenninger/CamContextI2V)] ![Code](https://img.shields.io/github/stars/LDenninger/CamContextI2V?style=social&label=Star)\n\n[arxiv 2025.04]  RealCam-Vid: High-resolution Video Dataset with Dynamic Scenes and Metric-scale Camera Movements [[PDF](https://arxiv.org/abs/2504.08212),[Page](https://github.com/ZGCTroy/RealCam-Vid)] ![Code](https://img.shields.io/github/stars/ZGCTroy/RealCam-Vid?style=social&label=Star)\n\n[arxiv 2025.04]  TokenMotion: Decoupled Motion Control via Token Disentanglement for Human-centric Video Generation [[PDF](https://arxiv.org/pdf/2504.08181)]\n\n
Generation as a Director of Photography  [[PDF](https://arxiv.org/abs/2504.07083),[Page](https://kszpxxzmc.github.io/GenDoP/)] ![Code](https://img.shields.io/github/stars/3DTopia/GenDoP?style=social&label=Star)\n\n[arxiv 2025.04] Modular-Cam: Modular Dynamic Camera-view Video Generation with LLM  [[PDF](https://arxiv.org/abs/2504.12048),[Page](https://modular-cam.github.io/)] \n\n[arxiv 2025.04]  CamMimic: Zero-Shot Image to Camera Motion Personalized Video Generation using Diffusion Models [[PDF](https://arxiv.org/abs/2504.09472),[Page](https://cammimic.github.io/)]\n\n[arxiv 2025.04] CameraBench: Towards Understanding Camera Motions in Any Video  [[PDF](https://arxiv.org/abs/2504.15376),[Page](https://linzhiqiu.github.io/papers/camerabench/)] \n\n[arxiv 2025.04]  Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation [[PDF](https://arxiv.org/abs/2504.14899),[Page](https://ewrfcas.github.io/Uni3C/)] ![Code](https://img.shields.io/github/stars/ewrfcas/Uni3C?style=social&label=Star)\n\n[arxiv 2025.04] Dynamic Camera Poses and Where to Find Them  [[PDF](https://arxiv.org/abs/2504.17788),[Page](https://research.nvidia.com/labs/dir/dynpose-100k/)] \n\n[arxiv 2025.06]  CamCloneMaster: Enabling Reference-based Camera Control for Video Generation [[PDF](https://arxiv.org/abs/2506.03140),[Page](https://camclonemaster.github.io/)] \n\n[arxiv 2025.06] Vid-CamEdit: Video Camera Trajectory Editing with Generative Rendering from Estimated Geometry  [[PDF](https://arxiv.org/abs/xxx),[Page](https://cvlab-kaist.github.io/Vid-CamEdit/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/Vid-CamEdit?style=social&label=Star)\n\n[arxiv 2025.07] Physics-Grounded Motion Forecasting via Equation Discovery for Trajectory-Guided Image-to-Video Generation  [[PDF](https://arxiv.org/abs/2507.06830)]\n\n[arxiv 2025.10]  From Seeing to Predicting: A Vision-Language Framework for Trajectory Forecasting and Controlled Video Generation [[PDF](https://arxiv.org/abs/2510.00806)]\n\n[arxiv 2025.10]  3D Scene Prompting for Scene-Consistent Camera-Controllable Video Generation [[PDF](https://arxiv.org/abs/2510.14945),[Page](https://cvlab-kaist.github.io/3DScenePrompt/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/3DScenePrompt?style=social&label=Star)\n\n[arxiv 2025.10] VividCam: Learning Unconventional Camera Motions from Virtual Synthetic Videos  [[PDF](https://arxiv.org/abs/2510.24904),[Page](https://wuqiuche.github.io/VividCamDemoPage/)] \n\n[arxiv 2025.10]  See4D: Pose-Free 4D Generation via Auto-Regressive Video Inpainting [[PDF](https://arxiv.org/abs/2510.26796),[Page](https://see-4d.github.io/)] \n\n[arxiv 2025.11] PostCam: Camera-Controllable Novel-View Video Generation with Query-Shared Cross-Attention  [[PDF](https://arxiv.org/abs/2511.17185),[Page](https://cccqaq.github.io/PostCam.github.io/)] ![Code](https://img.shields.io/github/stars/zju3dv/PostCam?style=social&label=Star)\n\n[arxiv 2025.12] DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation  [[PDF](https://arxiv.org/abs/2511.23127),[Page](https://soyouthinkyoucantell.github.io/dualcamctrl-page/)] ![Code](https://img.shields.io/github/stars/EnVision-Research/DualCamCtrl?style=social&label=Star)\n\n[arxiv 2026.01] Plenoptic Video Generation  [[PDF](https://drive.google.com/file/d/19f27mjcPv3VOcKWLRvGJNt6fR2Rck5gT/view?usp=sharing),[Page](https://research.nvidia.com/labs/dir/plenopticdreamer/)] \n\n[arxiv 2026.01] VerseCrafter: Dynamic Realistic Video World Model with 4D 
Geometric Control  [[PDF](https://arxiv.org/pdf/2601.05138),[Page](https://sixiaozheng.github.io/VerseCrafter_page/)] ![Code](https://img.shields.io/github/stars/TencentARC/VerseCrafter?style=social&label=Star)\n\n[arxiv 2026.01] Efficient Camera-Controlled Video Generation of Static Scenes via Sparse Diffusion and 3D Rendering  [[PDF](https://arxiv.org/abs/2601.09697),[Page](https://ayushtewari.com/projects/srender/)] \n\n[arxiv 2026.01] CamPilot: Improving Camera Control in Video Diffusion Model with Efficient Camera Reward Feedback  [[PDF](https://arxiv.org/abs/2512.16915),[Page](https://a-bigbao.github.io/CamPilot/)]\n\n[arxiv 2026.01]  3DiMo: 3D-Aware Implicit Motion Control for View-Adaptive Human Video Generation [[PDF](https://arxiv.org/abs/2602.03796),[Page](https://hjrphoebus.github.io/3DiMo/)] \n\n[arxiv 2026.02]  ReRoPE: Repurposing RoPE for Relative Camera Control [[PDF](https://arxiv.org/abs/2602.08068)]\n\n[arxiv 2026.03] CamDirector: Towards Long-Term Coherent Video Trajectory Editing  [[PDF](https://arxiv.org/abs/2603.02256),[Page](https://yinkejia.github.io/CamDirector-Project-Page/)] ![Code](https://img.shields.io/github/stars/yinkejia/CamDirector?style=social&label=Star)\n\n[arxiv 2026.03] FaceCam: Portrait Video Camera Control via Scale-Aware Conditioning  [[PDF](https://arxiv.org/abs/2603.05506),[Page](https://www.wlyu.me/FaceCam/)] ![Code](https://img.shields.io/github/stars/weijielyu/FaceCam?style=social&label=Star)\n\n[arxiv 2026.03] ConfCtrl: Enabling Precise Camera Control in Video Diffusion via Confidence-Aware Interpolation  [[PDF](https://arxiv.org/abs/2603.09819)]\n\n[arxiv 2026.03] CamLit: Unified Video Diffusion with Explicit Camera and Lighting Control  [[PDF](https://arxiv.org/abs/2603.14241)]\n\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## lighting \n[arxiv 2025.02] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion  [[PDF](https://arxiv.org/abs/2502.08590),[Page](https://bujiazi.github.io/light-a-video.github.io/)] ![Code](https://img.shields.io/github/stars/bcmi/Light-A-Video/?style=social&label=Star)\n\n[arxiv 2025.06]  TC-Light: Temporally Consistent Relighting for Dynamic Long Videos [[PDF](https://arxiv.org/abs/2506.18904),[Page](https://dekuliutesla.github.io/tclight/)] ![Code](https://img.shields.io/github/stars/Linketic/TC-Light?style=social&label=Star)\n\n[arxiv 2025.10]  UniLumos: Fast and Unified Image and Video Relighting with Physics-Plausible Feedback [[PDF](https://arxiv.org/abs/2511.01678),[Page](https://github.com/alibaba-damo-academy/Lumos-Custom)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/Lumos-Custom?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## inpainting / outpainting \n[MM 2023.09]Hierarchical Masked 3D Diffusion Model for Video Outpainting [[PDF](https://arxiv.org/abs/2309.02119)]\n\n[arxiv 2023.11]Flow-Guided Diffusion for Video Inpainting [[PDF](https://arxiv.org/abs/2311.15368)]\n\n[arxiv 2024.01]ActAnywhere: Subject-Aware Video Background Generation [[PDF](https://arxiv.org/abs/2401.10822), [Page](https://actanywhere.github.io/)]\n\n[arxiv 2024.03]CoCoCo: Improving Text-Guided Video Inpainting for Better Consistency, Controllability and Compatibility [[PDF](https://arxiv.org/abs/2403.12035),[Page](https://cococozibojia.github.io/)]\n\n[arxiv 2024.03]Be-Your-Outpainter: Mastering Video Outpainting through 
Input-Specific Adaptation [[PDF](https://arxiv.org/abs/2403.13745),[Page](https://github.com/G-U-N/Be-Your-Outpainter)]\n\n[arxiv 2024.04]AudioScenic: Audio-Driven Video Scene Editing [[PDF](https://arxiv.org/abs/2404.16581)]\n\n[arxiv 2024.05]Semantically Consistent Video Inpainting with Conditional Diffusion Models [[PDF](https://arxiv.org/abs/2405.00251)]\n\n[arxiv 2024.05]ReVideo: Remake a Video with Motion and Content Control [[PDF](https://arxiv.org/abs/2405.13865),[Page](https://mc-e.github.io/project/ReVideo/)]\n\n[arxiv 2024.08]Video Diffusion Models are Strong Video Inpainter  [[PDF](https://arxiv.org/abs/2408.11402)]\n\n[arxiv 2024.09] Follow-Your-Canvas: Higher-Resolution Video Outpainting with Extensive Content Generation [[PDF](https://arxiv.org/abs/2409.01055),[Page](https://github.com/mayuelala/FollowYourCanvas)]\n\n[arxiv 2024.12]  UniPaint: Unified Space-time Video Inpainting via Mixture-of-Experts [[PDF](https://arxiv.org/abs/2412.06340)]\n\n[arxiv 2024.12] OmniDrag: Enabling Motion Control for Omnidirectional Image-to-Video Generation  [[PDF](https://arxiv.org/abs/2412.09623),[Page](https://lwq20020127.github.io/OmniDrag/)] \n\n[arxiv 2025.01] DiffuEraser: A Diffusion Model for Video Inpainting  [[PDF](https://arxiv.org/abs/2501.10018),[Page](https://github.com/lixiaowen-xw/DiffuEraser)] ![Code](https://img.shields.io/github/stars/lixiaowen-xw/DiffuEraser?style=social&label=Star)\n\n[arxiv 2025.06]  MiniMax-Remover: Taming Bad Noise Helps Video Object Removal [[PDF](https://arxiv.org/abs/2505.24873),[Page](https://minimax-remover.github.io/)] ![Code](https://img.shields.io/github/stars/zibojia/MiniMax-Remover?style=social&label=Star)\n\n[arxiv 2025.06]  Follow-Your-Creation: Empowering 4D Creation through Video Inpainting [[PDF](https://arxiv.org/abs/2506.04590),[Page](https://follow-your-creation.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/FollowYourCreation?style=social&label=Star)\n\n[arxiv 2025.06] OutDreamer: Video Outpainting with a Diffusion Transformer  [[PDF](https://arxiv.org/abs/2506.22298)]\n\n[arxiv 2025.10] VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning  [[PDF](https://arxiv.org/abs/2510.08555),[Page](https://onevfall.github.io/project_page/videocanvas/#teaser)]\n\n[arxiv 2025.11] Unified Long Video Inpainting and Outpainting via Overlapping High-Order Co-Denoising  [[PDF](https://arxiv.org/pdf/2511.03272)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Video Quality \n[arxiv 2024.03]VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion Models[[PDF](https://arxiv.org/abs/2403.05438),[Page](https://videoelevator.github.io/)]\n\n\n\n\n## super-resolution\n[arxiv 2023.11]Enhancing Perceptual Quality in Video Super-Resolution through Temporally-Consistent Detail Synthesis using Diffusion Models [[PDF](https://arxiv.org/abs/2311.15908)]\n\n[arxiv 2023.12]Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution [[PDF](https://arxiv.org/abs/2312.06640),[Page](https://shangchenzhou.com/projects/upscale-a-video/)]\n\n[arxiv 2023.12]Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements [[PDF](https://arxiv.org/abs/2312.07835),[Page](http://www.cs.umd.edu/~gauravsh/vdp.html)]\n\n[arxiv 2024.03]Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution 
[[PDF](https://arxiv.org/abs/2403.17000)]\n\n[arxiv 2024.04]VideoGigaGAN: Towards Detail-rich Video Super-Resolution [[PDF](https://videogigagan.github.io/assets/paper.pdf), [Page](https://videogigagan.github.io/)]\n\n[arxiv 2024.06] EvTexture: Event-driven Texture Enhancement for Video Super-Resolution [[PDF](https://arxiv.org/abs/2406.13457),[Page](https://dachunkai.github.io/evtexture.github.io/)] ![Code](https://img.shields.io/github/stars/DachunKai/EvTexture?style=social&label=Star)\n\n[arxiv 2024.07]  DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models [[PDF](https://arxiv.org/abs/2407.01519),[Page](https://jimmycv07.github.io/DiffIR2VR_web/)]\n\n[arxiv 2024.07] Zero-shot Video Restoration and Enhancement Using Pre-Trained Image Diffusion Model  [[PDF](https://arxiv.org/abs/2407.01960)]\n\n[arxiv 2024.07] VEnhancer: Generative Space-Time Enhancement for Video Generation[[PDF](https://arxiv.org/abs/2407.07667),[Page](https://vchitect.github.io/VEnhancer-project/)]\n\n\n[arxiv 2024.07] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models [[PDF](https://arxiv.org/abs/2407.10285),[Page](https://yangqy1110.github.io/NC-SDEdit/)]\n\n[arxiv 2024.07]  RealViformer: Investigating Attention for Real-World Video Super-Resolution [[PDF](),[Page]()]\n\n[arxiv 2024.08]Kalman-Inspired Feature Propagation for Video Face Super-Resolution[[PDF](https://arxiv.org/abs/2408.05205),[Page](https://jnjaby.github.io/projects/KEEP/)]\n\n[arxiv 2024.08] Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement  [[PDF](https://arxiv.org/abs/2408.12316),[Page](https://github.com/lingyzhu0101/UDU)]\n\n[arxiv 2024.08] SeeClear: Semantic Distillation Enhances Pixel Condensation for Video Super-Resolution  [[PDF](https://arxiv.org/abs/2410.05799),[Page](https://github.com/Tang1705/SeeClear-NeurIPS24)]\n\n[arxiv 2025.01]  SeedVR: Seeding Infinity in Diffusion Transformer Towards Generic Video Restoration [[PDF](https://arxiv.org/abs/2501.01320),[Page](https://iceclear.github.io/projects/seedvr/)] \n\n[arxiv 2025.01]  STAR: Spatial-Temporal Augmentation with Text-to-Video Models for Real-World Video Super-Resolution [[PDF](https://arxiv.org/abs/2501.02976),[Page](https://nju-pcalab.github.io/projects/STAR/)] ![Code](https://img.shields.io/github/stars/NJU-PCALab/STAR?style=social&label=Star)\n\n[arxiv 2025.01]  SVFR: A Unified Framework for Generalized Video Face Restoration [[PDF](https://arxiv.org/pdf/2501.01235),[Page](https://wangzhiyaoo.github.io/SVFR/)] ![Code](https://img.shields.io/github/stars/wangzhiyaoo/SVFR?style=social&label=Star)\n\n[arxiv 2025.01]  DiffVSR: Enhancing Real-World Video Super-Resolution with Diffusion Models for Advanced Visual Quality and Temporal Consistency [[PDF](https://arxiv.org/abs/2501.10110),[Page](https://xh9998.github.io/DiffVSR-project/)] ![Code](https://img.shields.io/github/stars/xh9998/DiffVSR?style=social&label=Star)\n\n[arxiv 2025.03]  Temporal-Consistent Video Restoration with Pre-trained Diffusion Models [[PDF](https://arxiv.org/pdf/2503.14863)]\n\n[arxiv 2025.04]  
RepNet-VSR: Reparameterizable Architecture for High-Fidelity Video Super-Resolution [[PDF](https://arxiv.org/abs/2504.15649)]\n\n[arxiv 2025.05]  DOVE: Efficient One-Step Diffusion Model for Real-World Video Super-Resolution [[PDF](https://arxiv.org/pdf/2505.16239),[Page](https://github.com/zhengchen1999/DOVE)] ![Code](https://img.shields.io/github/stars/zhengchen1999/DOVE?style=social&label=Star)\n\n[arxiv 2025.06] SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training  [[PDF](https://arxiv.org/abs/2506.05301),[Page](https://iceclear.github.io/projects/seedvr2/)] ![Code](https://img.shields.io/github/stars/IceClear/SeedVR2?style=social&label=Star)\n\n[arxiv 2025.06]  DualX-VSR: Dual Axial Spatial×Temporal Transformer for Real-World Video Super-Resolution without Motion Compensation [[PDF](https://arxiv.org/pdf/2506.04830)]\n\n[arxiv 2025.06] LiftVSR: Lifting Image Diffusion to Video Super-Resolution via Hybrid Temporal Modeling with Only 4  [[PDF](https://arxiv.org/abs/2506.08529),[Page](https://kopperx.github.io/projects/liftvsr)] ![Code](https://img.shields.io/github/stars/kopperx/LiftVSR?style=social&label=Star)\n\n[arxiv 2025.06]  MambaVSR: Content-Aware Scanning State Space Model for Video Super-Resolution [[PDF](https://arxiv.org/pdf/2506.11768)]\n\n[arxiv 2025.06]  One-Step Diffusion for Detail-Rich and Temporally Consistent Video Super-Resolution [[PDF](https://arxiv.org/abs/2506.15591),[Page](https://github.com/yjsunnn/DLoRAL)] ![Code](https://img.shields.io/github/stars/yjsunnn/DLoRAL?style=social&label=Star)\n\n[arxiv 2025.06] FastInit: Fast Noise Initialization for Temporally Consistent Video Generation  [[PDF](https://arxiv.org/pdf/2506.16119)]\n\n[arxiv 2025.06]  SimpleGVR: A Simple Baseline for Latent-Cascaded Video Super-Resolution [[PDF](https://arxiv.org/pdf/2506.19838),[Page](https://simplegvr.github.io/)] \n\n[arxiv 2025.07] TURBOVSR: Fantastic Video Upscalers and Where to Find Them  [[PDF](https://arxiv.org/pdf/2506.23618)]\n\n[arxiv 2025.07]  VSRM: A Robust Mamba-Based Framework for Video Super-Resolution [[PDF](https://arxiv.org/abs/2506.22762)]\n\n[arxiv 2025.07] DAM-VSR: Disentanglement of Appearance and Motion for Video Super-Resolution  [[PDF](https://arxiv.org/abs/2507.01012),[Page](https://kongzhecn.github.io/projects/dam-vsr/)] ![Code](https://img.shields.io/github/stars/kongzhecn/DAM-VSR?style=social&label=Star)\n\n[arxiv 2025.08] Vivid-VR: Distilling Concepts from Text-to-Video Diffusion Transformer for Photorealistic Video Restoration  [[PDF](https://arxiv.org/abs/2508.14483),[Page](https://csbhr.github.io/projects/vivid-vr/)] ![Code](https://img.shields.io/github/stars/csbhr/Vivid-VR?style=social&label=Star)\n\n[arxiv 2025.08]  CineScale: Free Lunch in High-Resolution Cinematic Visual Generation [[PDF](https://arxiv.org/abs/2508.15774),[Page](https://eyeline-labs.github.io/CineScale/)] ![Code](https://img.shields.io/github/stars/Eyeline-Labs/CineScale?style=social&label=Star)\n\n[arxiv 2025.10]  Continuous Space-Time Video Super-Resolution with 3D Fourier Fields [[PDF](https://arxiv.org/abs/2509.26325),[Page](https://v3vsr.github.io/)] \n\n[arxiv 2025.10] PatchVSR: Breaking Video Diffusion Resolution Limits with Patch-wise Video Super-Resolution  [[PDF](https://arxiv.org/abs/2509.26025)]\n\n[arxiv 2025.10]  UniMMVSR: A Unified Multi-Modal Framework for Cascaded Video Super-Resolution [[PDF](https://arxiv.org/abs/2510.08143),[Page](https://shiandu.github.io/UniMMVSR-website/)] \n\n[arxiv 2025.10] InfVSR: Breaking Length Limits of 
Generic Video Super-Resolution  [[PDF](https://arxiv.org/abs/2510.00948),[Page](https://github.com/Kai-Liu001/InfVSR)] ![Code](https://img.shields.io/github/stars/Kai-Liu001/InfVSR?style=social&label=Star)\n\n\n[arxiv 2025.10] FlashVSR: Towards Real-Time Diffusion-Based Streaming Video Super-Resolution  [[PDF](https://arxiv.org/abs/2510.12747),[Page](https://zhuang2002.github.io/FlashVSR/)] ![Code](https://img.shields.io/github/stars/OpenImagingLab/FlashVSR?style=social&label=Star)\n\n[arxiv 2025.10]  UltraGen: High-Resolution Video Generation with Hierarchical Attention [[PDF](https://arxiv.org/pdf/2510.18775),[Page](https://sjtuplayer.github.io/projects/UltraGen/)] ![Code](https://img.shields.io/github/stars/sjtuplayer/UltraGen?style=social&label=Star)\n\n[arxiv 2025.10]  Restore Text First, Enhance Image Later: Two-Stage Scene Text Image Super-Resolution with Glyph Structure Guidance [[PDF](http://arxiv.org/abs/2510.21590),[Page](https://tony-lowe.github.io/TIGER_project_page/)] ![Code](https://img.shields.io/github/stars/Tony-Lowe/TIGER_project_page?style=social&label=Star)\n\n[arxiv 2025.12]  Transform Trained Transformer: Accelerating Naive 4K Video Generation Over 10 [[PDF](https://arxiv.org/abs/2512.13492),[Page](https://zhangzjn.github.io/projects/T3-Video/)] ![Code](https://img.shields.io/github/stars/zhangzjn/T3-Video?style=social&label=Star)\n\n[arxiv 2025.12]  HiStream: Efficient High-Resolution Video Generation via Redundancy Eliminated Streaming [[PDF](https://arxiv.org/abs/2512.21338),[Page](http://haonanqiu.com/projects/HiStream.html)] ![Code](https://img.shields.io/github/stars/arthur-qiu/HiStream?style=social&label=Star)\n\n[arxiv 2025.12] Stream-DiffVSR: Low-Latency Streamable Video Super-Resolution via Auto-Regressive Diffusion  [[PDF](https://arxiv.org/abs/2512.23709),[Page](https://jamichss.github.io/stream-diffvsr-project-page/)] \n\n[arxiv 2026.01]  Zero-Shot Video Restoration and Enhancement with Assistance of Video Diffusion Models [[PDF](https://arxiv.org/abs/2601.21922)]\n\n[arxiv 2026.03] Improved Adversarial Diffusion Compression for Real-World Video Super-Resolution  [[PDF](https://arxiv.org/abs/2603.00458)]\n\n[arxiv 2026.03] FrescoDiffusion: 4K Image-to-Video with Prior-Regularized Tiled Diffusion  [[PDF](https://arxiv.org/abs/2603.17555),[Page](https://f2v.pages.dev/)]\n\n[arxiv 2026.03] ViBe: Ultra-High-Resolution Video Synthesis Born from Pure Images  [[PDF](https://arxiv.org/abs/2603.23326)] ![Code](https://img.shields.io/github/stars/WillWu111/ViBe?style=social&label=Star)\n\n[arxiv 2026.03] DUO-VSR: Dual-Stream Distillation for One-Step Video Super-Resolution  [[PDF](https://arxiv.org/abs/2603.22271),[Page](https://cszy98.github.io/DUO-VSR/)] ![Code](https://img.shields.io/github/stars/cszy98/DUO-VSR?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## restoration \n[arxiv 2024.08] Towards Real-world Event-guided Low-light Video Enhancement and Deblurring[[PDF](https://arxiv.org/abs/2408.14916)]\n\n[arxiv 2024.08] Cross-Modal Temporal Alignment for Event-guided Video Deblurring[[PDF](https://arxiv.org/abs/2408.14930)]\n\n[arxiv 2025.01]  SVFR: A Unified Framework for Generalized Video Face Restoration [[PDF](https://arxiv.org/pdf/2501.01235),[Page](https://wangzhiyaoo.github.io/SVFR/)] ![Code](https://img.shields.io/github/stars/wangzhiyaoo/SVFR?style=social&label=Star)\n\n[arxiv 2025.02] Human Body Restoration with One-Step Diffusion Model and A New 
Benchmark  [[PDF](https://arxiv.org/abs/2502.01411),[Page](https://github.com/gobunu/OSDHuman)] ![Code](https://img.shields.io/github/stars/gobunu/OSDHuman?style=social&label=Star)\n\n[arxiv 2025.10]  MoA-VR: A Mixture-of-Agents System Towards All-in-One Video Restoration [[PDF](https://arxiv.org/abs/2510.08508),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## downstream apps\n[arxiv 2023.11]Breathing Life Into Sketches Using Text-to-Video Priors [[PDF](https://arxiv.org/abs/2311.13608),[Page](https://livesketch.github.io/)]\n\n[arxiv 2023.11]Flow-Guided Diffusion for Video Inpainting [[PDF](https://arxiv.org/abs/2311.15368)]\n\n[arxiv 2024.02]Animated Stickers: Bringing Stickers to Life with Video Diffusion [[PDF](https://arxiv.org/abs/2402.06088)]\n\n[arxiv 2024.03]DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation [[PDF](https://arxiv.org/abs/2403.06845),[Page](https://drivedreamer2.github.io/)]\n\n[arxiv 2024.03]Intention-driven Ego-to-Exo Video Generation [[PDF](https://arxiv.org/abs/2403.09194)]\n\n[arxiv 2024.04]PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation [[PDF](https://arxiv.org/abs/2404.13026),[Page](https://physdreamer.github.io/)]\n\n[arxiv 2024.04]Tunnel Try-on: Excavating Spatial-temporal Tunnels for High-quality Virtual Try-on in Videos [[PDF](https://arxiv.org/abs/2404.17571),[Page](https://mengtingchen.github.io/tunnel-try-on-page/)]\n\n[arxiv 2024.04]Dance Any Beat: Blending Beats with Visuals in Dance Video Generation [[PDF](https://arxiv.org/abs/2405.09266), [Page](https://dabfusion.github.io/)]\n\n[arxiv 2024.05] ViViD: Video Virtual Try-on using Diffusion Models  [[PDF](https://arxiv.org/abs/2405.11794),[Page](https://becauseimbatman0.github.io/ViViD)]\n\n[arxiv 2024.05] VITON-DiT: Learning In-the-Wild Video Try-On from Human Dance Videos via Diffusion Transformers[[PDF](https://arxiv.org/abs/2405.18326),[Page](https://zhengjun-ai.github.io/viton-dit-page/)]\n\n[arxiv 2024.07]WildVidFit: Video Virtual Try-On in the Wild via Image-Based Controlled Diffusion Models [[PDF](https://arxiv.org/abs/2407.10625), [Page](https://wildvidfit-project.github.io/)]\n\n[arxiv 2024.07]Streetscapes: Large-scale Consistent Street View Generation Using Autoregressive Video Diffusion\n [[PDF](https://arxiv.org/abs/2407.13759), [Page](https://boyangdeng.com/streetscapes)]\n\n[arxiv 2024.08] Panacea+: Panoramic and Controllable Video Generation for Autonomous Driving[[PDF](https://arxiv.org/abs/2408.07605), [Page](https://panacea-ad.github.io/)]\n\n[arxiv 2024.08]Diffusion Models Are Real-Time Game Engines [[PDF](https://arxiv.org/abs/2408.14837), [Page](https://gamengen.github.io/)]\n\n[arxiv 2024.09] DriveScape: Towards High-Resolution Controllable Multi-View Driving Video Generation [[PDF](https://arxiv.org/abs/2409.05463), [Page](https://metadrivescape.github.io/papers_project/drivescapev1/index.html)]\n\n[arxiv 2024.09] Pose-Guided Fine-Grained Sign Language Video Generation [[PDF](https://arxiv.org/abs/2409.16709)]\n\n[arxiv 2024.10] VidPanos: Generative Panoramic Videos from Casual Panning Videos [[PDF](https://arxiv.org/abs/2410.13832), [Page](https://vidpanos.github.io/)]\n\n[arxiv 2024.11]  GameGen-X: Interactive Open-world Game Video Generation[[PDF](https://arxiv.org/abs/2411.00769), [Page](https://github.com/GameGen-X/GameGen-X)]\n\n[arxiv 2024.11] Fashion-VDM: Video 
Diffusion Model for Virtual Try-On [[PDF](https://arxiv.org/abs/2411.00225), [Page]()]\n\n[arxiv 2024.11] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation [[PDF](https://arxiv.org/abs/2411.08380), [Page](https://egovid.github.io/)]\n\n[arxiv 2024.11] FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations [[PDF](https://arxiv.org/abs/2411.10818), [Page](https://github.com/hmrishavbandy/FlipSketch)]\n\n[arxiv 2024.11] PhysMotion: Physics-Grounded Dynamics From a Single Image [[PDF](https://arxiv.org/abs/2411.17189),[Page](https://supertan0204.github.io/physmotion_website/)] \n\n[arxiv 2024.11] InTraGen: Trajectory-controlled Video Generation for Object Interactions [[PDF](https://arxiv.org/abs/2411.16804),[Page](https://github.com/insait-institute/InTraGen)] ![Code](https://img.shields.io/github/stars/insait-institute/InTraGen?style=social&label=Star)\n\n[arxiv 2024.12] MatchDiffusion:Training-free Generation of Match-Cuts [[PDF](https://arxiv.org/abs/2411.18677),[Page](https://matchdiffusion.github.io/)] ![Code](https://img.shields.io/github/stars/PardoAlejo/MatchDiffusion?style=social&label=Star)\n\n[arxiv 2024.12]  Instructional Video Generation [[PDF](https://arxiv.org/pdf/2412.04189),[Page](https://excitedbutter.github.io/Instructional-Video-Generation/)] \n\n[arxiv 2024.12] InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models [[PDF](https://arxiv.org/abs/2412.03934),[Page](https://research.nvidia.com/labs/toronto-ai/infinicube/)] \n\n[arxiv 2024.12] Video Creation by Demonstration [[PDF](https://arxiv.org/abs/2412.09551),[Page](https://delta-diffusion.github.io/)] \n\n[arxiv 2024.12] InterDyn: Controllable Interactive Dynamics with Video Diffusion Models [[PDF](https://arxiv.org/abs/2412.11785),[Page](https://interdyn.is.tue.mpg.de/)] \n\n[arxiv 2025.01] TransPixar: Advancing Text-to-Video Generation with Transparency [[PDF](https://arxiv.org/abs/2501.03006),[Page](https://wileewang.github.io/TransPixar/)] ![Code](https://img.shields.io/github/stars/wileewang/TransPixar?style=social&label=Star)\n\n[arxiv 2025.01] Cosmos World Foundation Model Platform for Physical AI [[PDF](https://arxiv.org/abs/2501.03575),[Page](https://www.nvidia.com/en-us/ai/cosmos/)] ![Code](https://img.shields.io/github/stars/NVIDIA/Cosmos?style=social&label=Star)\n\n[arxiv 2025.01] SynthLight: Portrait Relighting with Diffusion Model by Learning to Re-render Synthetic Faces [[PDF](https://arxiv.org/abs/2501.09756),[Page](https://vrroom.github.io/synthlight/)] \n\n[arxiv 2025.01] VanGogh: A Unified Multimodal Diffusion-based Framework for Video Colorization [[PDF](https://arxiv.org/abs/2501.09499),[Page](https://becauseimbatman0.github.io/VanGogh)] ![Code](https://img.shields.io/github/stars/BecauseImBatman0/VanGogh?style=social&label=Star)\n\n[arxiv 2025.01] RelightVid: Temporal-Consistent Diffusion Model for Video Relighting [[PDF](https://arxiv.org/abs/2501.16330)]\n\n[arxiv 2025.02] Mobius: Text to Seamless Looping Video Generation via Latent Shift  [[PDF](https://arxiv.org/abs/2502.20307),[Page](https://mobius-diffusion.github.io/)] ![Code](https://img.shields.io/github/stars/YisuiTT/Mobius?style=social&label=Star)\n\n[arxiv 2025.03]  TASTE-Rob: Advancing Video Generation of Task-Oriented Hand-Object Interaction for Generalizable Robotic Manipulation [[PDF](https://arxiv.org/pdf/2503.11423),[Page](https://taste-rob.github.io/)]\n\n[arxiv 2025.04]  Every Painting Awakened: A Training-free Framework for 
Painting-to-Animation Generation [[PDF](https://arxiv.org/abs/2503.23736),[Page](https://painting-animation.github.io/animation/)] ![Code](https://img.shields.io/github/stars/lingyuliu/Every-Painting-Awakened?style=social&label=Star)\n\n[arxiv 2025.04] VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step  [[PDF](https://arxiv.org/abs/2504.01956),[Page](https://hanyang-21.github.io/VideoScene)] ![Code](https://img.shields.io/github/stars/hanyang-21/VideoScene?style=social&label=Star)\n\n[arxiv 2025.04]  Scene Splatter: Momentum 3D Scene Generation from Single Image with Video Diffusion Model [[PDF](https://arxiv.org/abs/2504.02764),[Page](https://shengjun-zhang.github.io/SceneSplatter/)] \n\n[arxiv 2025.04]  Aligning Anime Video Generation with Human Feedback [[PDF](https://arxiv.org/pdf/2504.10044)]\n\n[arxiv 2025.06] LayerFlow: A Unified Model for Layer-aware Video Generation  [[PDF](https://arxiv.org/abs/2506.04228),[Page](https://sihuiji.github.io/LayerFlow-Page/)] ![Code](https://img.shields.io/github/stars/SihuiJi/LayerFlow?style=social&label=Star)\n\n[arxiv 2025.08]  ToonComposer: Streamlining Cartoon Production with Generative Post-Keyframing [[PDF](https://arxiv.org/abs/),[Page](https://lg-li.github.io/project/tooncomposer)] ![Code](https://img.shields.io/github/stars/TencentARC/ToonComposer?style=social&label=Star)\n\n[arxiv 2025.08] CineTrans: Learning to Generate Videos with Cinematic Transitions via Masked Diffusion Models [[PDF](https://arxiv.org/abs/2508.11484),[Page](https://uknowsth.github.io/CineTrans/)] ![Code](https://img.shields.io/github/stars/UknowSth/CineTrans?style=social&label=Star)\n\n[arxiv 2025.09] CamPVG: Camera-Controlled Panoramic Video Generation with Epipolar-Aware Diffusion  [[PDF](https://arxiv.org/abs/2509.19979)]\n\n[arxiv 2025.09] ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering  [[PDF](https://arxiv.org/abs/2509.21541),[Page](https://ctrlhair-arxiv.netlify.app/)] \n\n[arxiv 2025.10]  Code2Video: A Code-centric Paradigm for Educational Video Generation [[PDF](https://arxiv.org/abs/2510.01174),[Page](https://showlab.github.io/Code2Video/)] ![Code](https://img.shields.io/github/stars/showlab/Code2Video/?style=social&label=Star)\n\n[arxiv 2025.10] Paper2Video: Automatic Video Generation from Scientific Papers [[PDF](https://arxiv.org/abs/2510.05096),[Page](https://github.com/showlab/Paper2Video)] ![Code](https://img.shields.io/github/stars/showlab/Paper2Video?style=social&label=Star)\n\n[arxiv 2025.10] Generating Human Motion Videos using a Cascaded Text-to-Video Framework  [[PDF](https://arxiv.org/abs/2510.03909),[Page](https://hyelinnam.github.io/Cameo/)] ![Code](https://img.shields.io/github/stars/HyelinNAM/Cameo?style=social&label=Star)\n\n[arxiv 2025.11] RelightMaster: Precise Video Relighting with Multi-plane Light Images  [[PDF](https://arxiv.org/abs/2511.06271),[Page](https://wkbian.github.io/Projects/RelightMaster/)] \n\n[arxiv 2025.11] EgoControl: Controllable Egocentric Video Generation via 3D Full-Body Poses  [[PDF](https://arxiv.org/abs/2511.18173),[Page](https://cvg-bonn.github.io/EgoControl/)]\n\n[arxiv 2025.12]  WorldWander: Bridging Egocentric and Exocentric Worlds in Video Generation [[PDF](https://arxiv.org/pdf/2511.22098),[Page](https://github.com/showlab/WorldWander)] ![Code](https://img.shields.io/github/stars/showlab/WorldWander?style=social&label=Star)\n\n[arxiv 2025.12] SpriteHand: Real-Time Versatile Hand-Object Interaction with Autoregressive Video Generation  
[[PDF](https://arxiv.org/pdf/2512.01960)]\n\n[arxiv 2025.12] UniMo: Unifying 2D Video and 3D Human Motion with an Autoregressive Framework  [[PDF](https://arxiv.org/abs/2512.03918),[Page](https://carlyx.github.io/UniMo/)] \n\n[arxiv 2025.12] StereoWorld: Geometry-Aware Monocular-to-Stereo Video Generation  [[PDF](https://arxiv.org/abs/2512.09363),[Page](https://ke-xing.github.io/StereoWorld/)] \n\n[arxiv 2025.12] What Happens Next? Next Scene Prediction with a Unified Video Model  [[PDF](https://arxiv.org/abs/2512.13015),[Page](https://nextsceneprediction.github.io/)] \n\n[arxiv 2025.12] EasyOmnimatte: Taming Pretrained Inpainting Diffusion Models for End-to-End Video Layered Decomposition  [[PDF](https://arxiv.org/abs/2512.21865),[Page](https://yihanhu-2022.github.io/easyomnimatte-project/)] ![Code](https://img.shields.io/github/stars/GVCLab/EasyOmnimatte?style=social&label=Star)\n\n[arxiv 2026.01]  DreamLoop: Controllable Cinemagraph Generation from a Single Photograph [[PDF](https://arxiv.org/abs/2601.02646),[Page](https://anime26398.github.io/dreamloop.github.io/)] \n\n[arxiv 2026.01]  CoMoVi: Co-Generation of 3D Human Motions and Realistic Videos [[PDF](https://arxiv.org/abs/2601.10632),[Page](https://igl-hkust.github.io/CoMoVi/)] ![Code](https://img.shields.io/github/stars/IGL-HKUST/CoMoVi?style=social&label=Star)\n\n[arxiv 2026.03]  OmniLottie: Generating Vector Animations via Parameterized Lottie Tokens [[PDF](https://arxiv.org/abs/2603.02138),[Page](https://openvglab.github.io/OmniLottie/)] ![Code](https://img.shields.io/github/stars/OpenVGLab/OmniLottie?style=social&label=Star)\n\n[arxiv 2026.03] CubeComposer: Spatio-Temporal Autoregressive 4K 360° Video Generation from Perspective Video  [[PDF](https://arxiv.org/abs/2603.04291),[Page](https://lg-li.github.io/project/cubecomposer/)] ![Code](https://img.shields.io/github/stars/TencentARC/CubeComposer?style=social&label=Star)\n\n[arxiv 2026.03] WildDepth: A Multimodal Dataset for 3D Wildlife Perception and Depth Estimation  [[PDF](https://arxiv.org/abs/2603.16816),[Page](https://yunshin.github.io/WildDepth/)]\n\n[arxiv 2026.03] Foveated Diffusion: Efficient Spatially Adaptive Image and Video Generation  [[PDF](https://arxiv.org/abs/2603.23491),[Page](https://bchao1.github.io/foveated-diffusion/)] \n\n[arxiv 2026.03] FlashSign: Pose-Free Guidance for Efficient Sign Language Video Generation  [[PDF](https://arxiv.org/abs/2603.27915)]\n\n[arxiv 2026.03] OmniRoam: World Wandering via Long-Horizon Panoramic Video Generation  [[PDF](https://arxiv.org/abs/2603.30045),[Page](https://github.com/yuhengliu02/OmniRoam)] ![Code](https://img.shields.io/github/stars/yuhengliu02/OmniRoam?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Concept \n[arxiv 2023.07]Animate-A-Story: Storytelling with Retrieval-Augmented Video Generation [[PDF](https://arxiv.org/abs/2307.06940), [Page](https://videocrafter.github.io/Animate-A-Story)]\n\n[arxiv 2023.11]VideoDreamer: Customized Multi-Subject Text-to-Video Generation with Disen-Mix Finetuning[[PDF](https://arxiv.org/pdf/%3CARXIV%20PAPER%20ID%3E.pdf),[Page](https://videodreamer23.github.io/)]\n\n[arxiv 2023.12]VideoAssembler: Identity-Consistent Video Generation with Reference Entities using Diffusion Model [[PDF](https://arxiv.org/abs/2311.17338),[Page](https://gulucaptain.github.io/videoassembler/)]\n\n[arxiv 2023.12]VideoBooth: Diffusion-based Video Generation with Image Prompts 
[[PDF](https://arxiv.org/abs/2312.00777),[Page](https://vchitect.github.io/VideoBooth-project/)]\n\n[arxiv 2023.12]DreamVideo: Composing Your Dream Videos with Customized Subject and Motion [[PDF](https://arxiv.org/abs/2312.04433),[Page](https://dreamvideo-t2v.github.io/)]\n\n[arxiv 2023.12]PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models [[PDF](https://pi-animator.github.io/)]\n\n[arxiv 2024.01]CustomVideo: Customizing Text-to-Video Generation with Multiple Subjects [[PDF](https://arxiv.org/abs/2401.09962)]\n\n[arxiv 2024.02]Magic-Me: Identity-Specific Video Customized Diffusion [[PDF](https://arxiv.org/abs/2402.09368),[Page](https://magic-me-webpage.github.io/)]\n\n[arxiv 2024.03]EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing [[PDF](https://arxiv.org/abs/2403.16111),[Page](https://knightyxp.github.io/EVA/)]\n\n[arxiv 2024.04]AniClipart: Clipart Animation with Text-to-Video Priors [[PDF](https://arxiv.org/abs/2404.12347),[Page](https://aniclipart.github.io/)]\n\n[arxiv 2024.04]ID-Animator: Zero-Shot Identity-Preserving Human Video Generation [[PDF](),[Page](https://id-animator.github.io/)]\n\n[arxiv 2024.07]Still-Moving: Customized Video Generation without Customized Video Data [[PDF](https://arxiv.org/abs/2407.08674),[Page](https://still-moving.github.io/)]\n\n[arxiv 2024.08] CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities[[PDF](https://arxiv.org/abs/2408.13239),[Page](https://customcrafter.github.io/)]\n\n[arxiv 2024.10] TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation [[PDF](https://arxiv.org/abs/2410.05591),[Page](https://github.com/KwonGihyun/TweedieMix)]\n\n[arxiv 2024.10] PersonalVideo: High ID-Fidelity Video Customization With Static Images [[PDF](https://openreview.net/pdf?id=ndtFyx7UWs),[Page](https://personalvideo.github.io/)]\n\n[arxiv 2024.10] DreamVideo-2: Zero-Shot Subject-Driven Video Customization with Precise Motion Control [[PDF](https://arxiv.org/abs/2410.13830),[Page](https://dreamvideo2.github.io/)]\n\n[arxiv 2024.12]  MotionCharacter: Identity-Preserving and Motion Controllable Human Video Generation [[PDF](https://arxiv.org/abs/2411.18281),[Page](https://motioncharacter.github.io/)]\n\n[arxiv 2024.12] Multi-Shot Character Consistency for Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2412.07750),[Page](https://research.nvidia.com/labs/par/video_storyboarding)] \n\n[arxiv 2024.12] LoRACLR: Contrastive Adaptation for Customization of Diffusion Models  [[PDF](https://arxiv.org/abs/2412.09622),[Page](https://loraclr.github.io/)]\n\n[arxiv 2024.12] CustomTTT: Motion and Appearance Customized Video Generation via Test-Time Training  [[PDF](https://arxiv.org/abs/2412.15646),[Page](https://github.com/RongPiKing/CustomTTT)] ![Code](https://img.shields.io/github/stars/RongPiKing/CustomTTT?style=social&label=Star)\n\n[arxiv 2025.01] VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models  [[PDF](https://arxiv.org/abs/2412.19645),[Page](https://wutao-cs.github.io/VideoMaker/)] \n\n[arxiv 2025.01] Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers  [[PDF](https://arxiv.org/abs/2501.03931),[Page](https://julianjuaner.github.io/projects/MagicMirror/)] ![Code](https://img.shields.io/github/stars/dvlab-research/MagicMirror/?style=social&label=Star)\n\n[arxiv 2025.01] ConceptMaster: Multi-Concept Video Customization on Diffusion Transformer Models Without 
Test-Time Tuning  [[PDF](https://arxiv.org/abs/2501.04698),[Page](https://yuzhou914.github.io/ConceptMaster/)] \n\n[arxiv 2025.01] Multi-subject Open-set Personalization in Video Generation  [[PDF](https://arxiv.org/abs/2501.06187),[Page](https://snap-research.github.io/open-set-video-personalization/)] \n\n[arxiv 2025.01] EchoVideo: Identity-Preserving Human Video Generation by Multimodal Feature Fusion  [[PDF](https://arxiv.org/abs/2501.13452)]\n\n[arxiv 2025.02] Movie Weaver: Tuning-Free Multi-Concept Video Personalization with Anchored Prompts  [[PDF](https://jeff-liangf.github.io/projects/movieweaver/Movie_Weaver.pdf),[Page](https://jeff-liangf.github.io/projects/movieweaver/)] \n\n[arxiv 2025.02] Phantom: Subject-consistent video generation via cross-modal alignment  [[PDF](https://phantom-video.github.io/Phantom/),[Page](https://github.com/Phantom-video/Phantom)] ![Code](https://img.shields.io/github/stars/Phantom-video/Phantom?style=social&label=Star)\n\n[arxiv 2025.02] FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation  [[PDF](https://arxiv.org/abs/2502.13995),[Page](https://fantasy-amap.github.io/fantasy-id/)] ![Code](https://img.shields.io/github/stars/Fantasy-AMAP/fantasy-id?style=social&label=Star)\n\n[arxiv 2025.03] CINEMA: Coherent Multi-Subject Video Generation via MLLM-Based Guidance  [[PDF](https://arxiv.org/pdf/2503.10391)]\n\n[arxiv 2025.03] Concat-ID: Towards Universal Identity-Preserving Video Synthesis  [[PDF](),[Page](https://ml-gsai.github.io/Concat-ID-demo/)] ![Code](https://img.shields.io/github/stars/ML-GSAI/Concat-ID?style=social&label=Star)\n\n[arxiv 2025.03]  VideoMage: Multi-Subject and Motion Customization of Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2503.21781),[Page](https://jasper0314-huang.github.io/videomage-customization)]\n\n[arxiv 2025.03] FullDiT: Multi-Task Video Generative Foundation Model with Full Attention  [[PDF](https://arxiv.org/pdf/2503.19907v1),[Page](https://fulldit.github.io/)]\n\n[arxiv 2025.04] JointTuner: Appearance-Motion Adaptive Joint Training for Customized Video Generation [[PDF](https://arxiv.org/abs/2503.23951),[Page](https://fdchen24.github.io/JointTuner-Website/)]\n\n[arxiv 2025.04] SkyReels-A2: Compose Anything in Video Diffusion Transformers  [[PDF](https://arxiv.org/abs/2504.02436),[Page](https://github.com/SkyworkAI/SkyReels-A2)] ![Code](https://img.shields.io/github/stars/SkyworkAI/SkyReels-A2?style=social&label=Star)\n\n[arxiv 2025.04] Subject-driven Video Generation via Disentangled Identity and Motion  [[PDF](https://arxiv.org/abs/2504.17816),[Page](https://carpedkm.github.io/projects/disentangled_sub/index.html)] ![Code](https://img.shields.io/github/stars/carpedkm/disentangled-subject-to-vid?style=social&label=Star)\n\n[arxiv 2025.05]  HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation [[PDF](https://arxiv.org/pdf/2505.04512),[Page](https://hunyuancustom.github.io/)] ![Code](https://img.shields.io/github/stars/Tencent/HunyuanCustom?style=social&label=Star)\n\n[arxiv 2025.06] HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation  [[PDF](https://arxiv.org/pdf/2506.08797),[Page](https://anonymous.4open.science/w/homa-page-0FBE/)]\n\n[arxiv 2025.07] Proteus-ID: ID-Consistent and Motion-Coherent Video Customization  
[[PDF](https://arxiv.org/pdf/2506.23729),[Page](https://grenoble-zhang.github.io/Proteus-ID/)] ![Code](https://img.shields.io/github/stars/grenoble-zhang/Proteus-ID?style=social&label=Star)\n\n[arxiv 2025.07] Identity-Preserving Text-to-Video Generation Guided by Simple yet Effective Spatial-Temporal Decoupled Representations  [[PDF](https://arxiv.org/pdf/2507.04705)]\n\n[arxiv 2025.08] LaVieID: Local Autoregressive Diffusion Transformers for Identity-Preserving Video Creation  [[PDF](https://arxiv.org/abs/2508.07603),[Page](https://github.com/ssugarwh/LaVieID)] ![Code](https://img.shields.io/github/stars/ssugarwh/LaVieID?style=social&label=Star)\n\n[arxiv 2025.08] Stand-In: A Lightweight and Plug-and-Play Identity Control for Video Generation  [[PDF](https://arxiv.org/abs/2508.07901),[Page](https://arxiv.org/abs/2508.07901)] ![Code](https://img.shields.io/github/stars/WeChatCV/Stand-In?style=social&label=Star)\n\n[arxiv 2025.09]  Identity-Preserving Text-to-Video Generation via Training-Free Prompt, Image, and Guidance Enhancement [[PDF](https://arxiv.org/abs/2509.01362),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.09]  HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning [[PDF](https://arxiv.org/abs/2509.08519),[Page](https://phantom-video.github.io/HuMo/)] ![Code](https://img.shields.io/github/stars/Phantom-video/HuMo?style=social&label=Star)\n\n[arxiv 2025.10] BindWeave: Subject-Consistent Video Generation via Cross-Modal Integration [[PDF](https://arxiv.org/abs/2510.00438),[Page](https://lzy-dot.github.io/BindWeave/)] ![Code](https://img.shields.io/github/stars/bytedance/BindWeave?style=social&label=Star)\n\n[arxiv 2025.10]  Continual Personalization for Diffusion Models [[PDF](https://arxiv.org/abs/2510.02296)]\n\n[arxiv 2025.10] Character Mixing for Video Generation  [[PDF](https://arxiv.org/pdf/2510.05093),[Page](https://tingtingliao.github.io/mimix/)] ![Code](https://img.shields.io/github/stars/TingtingLiao/mimix?style=social&label=Star)\n\n[arxiv 2025.10]  Kaleido: Open-Sourced Multi-Subject Reference Video Generation Model [[PDF](https://arxiv.org/abs/2510.18573),[Page](https://github.com/CriliasMiller/Kaleido-OpenSourced)] ![Code](https://img.shields.io/github/stars/CriliasMiller/Kaleido-OpenSourced?style=social&label=Star)\n\n[arxiv 2025.10]  BachVid: Training-Free Video Generation with Consistent Background and Character [[PDF](https://arxiv.org/abs/2510.21696),[Page](https://wolfball.github.io/bachvid)] ![Code](https://img.shields.io/github/stars/wolfball/BachVid?style=social&label=Star)\n\n[arxiv 2025.11]  ID-Composer: Multi-Subject Video Synthesis with Hierarchical Identity Preservation [[PDF](https://arxiv.org/abs/2511.00511)]\n\n[arxiv 2025.11]  First Frame Is the Place to Go for Video Content Customization [[PDF](https://arxiv.org/abs/2511.15700),[Page](https://firstframego.github.io/)] ![Code](https://img.shields.io/github/stars/zli12321/FFGO-Video-Customization?style=social&label=Star)\n\n[arxiv 2025.12]  MoFu: Scale-Aware Modulation and Fourier Fusion for Multi-Subject Video Generation [[PDF](https://arxiv.org/abs/2512.22310)]\n\n[arxiv 2026.01] Slot-ID: Identity-Preserving Video Generation from Reference Videos via Slot-Based Temporal Identity Encoding  [[PDF](https://arxiv.org/pdf/2601.01352)]\n\n[arxiv 2026.03] WildActor: Unconstrained 
Identity-Preserving Video Generation [[PDF](https://wildactor.github.io/#),[Page](https://wildactor.github.io/)] ![Code](https://img.shields.io/github/stars/MeiGen-AI/WildActor?style=social&label=Star)\n\n[arxiv 2026.03] DreamVideo-Omni: Omni-Motion Controlled Multi-Subject Video Customization with Latent Identity Reinforcement Learning  [[PDF](https://arxiv.org/abs/2603.12257),[Page](https://dreamvideo-omni.github.io)]\n\n[arxiv 2026.03] 3DreamBooth: High-Fidelity 3D Subject-Driven Video Generation Model  [[PDF](https://arxiv.org/abs/2603.18524),[Page](https://ko-lani.github.io/3DreamBooth)] ![Code](https://img.shields.io/github/stars/Ko-Lani/3DreamBooth?style=social&label=Star)\n\n[arxiv 2026.03] LumosX: Relate Any Identities with Their Attributes for Personalized Video Generation [[PDF](https://arxiv.org/abs/2603.20192),[Page](https://jiazheng-xing.github.io/lumosx-home/)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/Lumos-Custom?style=social&label=Star)\n\n[arxiv 2026.03] Identity-Consistent Video Generation under Large Facial-Angle Variations  [[PDF](https://arxiv.org/abs/2603.21299)]\n\n[arxiv 2026.03] RefAlign: Representation Alignment for Reference-to-Video Generation  [[PDF](https://arxiv.org/abs/2603.25743),[Page](https://github.com/gudaochangsheng/RefAlign)] ![Code](https://img.shields.io/github/stars/gudaochangsheng/RefAlign?style=social&label=Star)\n\n[arxiv 2026.03] AnyID: Ultra-Fidelity Universal Identity-Preserving Video Generation from Any Visual References  [[PDF](https://arxiv.org/abs/2603.25188),[Page](https://johnneywang.github.io/AnyID-webpage/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## multi-view \n[arxiv 2026.01] MV-S2V: Multi-View Subject-Consistent Video Generation  [[PDF](https://arxiv.org/abs/2601.17756),[Page](https://szy-young.github.io/mv-s2v/)] \n\n[arxiv 2026.04] Action Images: End-to-End Policy Learning via Multiview Video Generation  [[PDF](https://arxiv.org/abs/2604.06168),[Page](https://actionimages.github.io/)] ![Code](https://img.shields.io/github/stars/UMass-Embodied-AGI/ActionImages?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## relation \n[arxiv 2025.03]  DreamRelation: Relation-Centric Video Customization [[PDF](https://arxiv.org/abs/2503.07602),[Page](https://dreamrelation.github.io/)] \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Talking Face \n[arxiv 2024.02]EMO: Emote Portrait Alive - Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions [[PDF](https://arxiv.org/abs/2402.17485),[Page](https://humanaigc.github.io/emote-portrait-alive/)]\n\n[arxiv 2024.04] VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time [[PDF](https://arxiv.org/abs/2404.10667),[Page](https://www.microsoft.com/en-us/research/project/vasa-1/)]\n\n[arxiv 2024.04]MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting[[PDF](),[Page](https://github.com/TMElyralab/MuseTalk)]\n\n[arxiv 2024.06]V-Express: Conditional Dropout for Progressive Training of Portrait Video Generation[[PDF](https://arxiv.org/abs/2406.01900),[Page](https://github.com/tencent-ailab/V-Express)]\n\n[arxiv 2024.06]Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait 
Animation[[PDF](),[Page](https://follow-your-emoji.github.io/)]\n\n[arxiv 2024.06] X-Portrait: Expressive Portrait Animation with Hierarchical Motion Attention [[PDF](https://arxiv.org/abs/2403.15931),[Page](https://github.com/bytedance/X-Portrait)]\n\n\n[arxiv 2024.09] CyberHost: Taming Audio-driven Avatar Diffusion Model with Region Codebook Attention[[PDF](https://arxiv.org/pdf/2409.01876),[Page](https://cyberhost.github.io/)]\n\n[arxiv 2024.09] SVP: Style-Enhanced Vivid Portrait Talking Head Diffusion Model [[PDF](https://arxiv.org/abs/2409.03270)]\n\n[arxiv 2024.09] Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency [[PDF](https://arxiv.org/pdf/2409.02634),[Page](https://loopyavatar.github.io/)]\n\n[arxiv 2024.09] DiffTED: One-shot Audio-driven TED Talk Video Generation with Diffusion-based Co-speech Gestures [[PDF](https://arxiv.org/abs/2409.07649)]\n\n[arxiv 2024.09] Stable Video Portraits [[PDF](https://arxiv.org/abs/2409.18083),[Page](https://svp.is.tue.mpg.de/)]\n\n[arxiv 2024.09] Portrait Video Editing Empowered by Multimodal Generative Priors [[PDF](https://arxiv.org/abs/2409.13591),[Page](https://ustc3dv.github.io/PortraitGen/)]\n\n[arxiv 2024.10] Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation [[PDF](https://arxiv.org/abs/2410.07718),[Page]()]\n\n[arxiv 2024.10] DAWN: Dynamic Frame Avatar with Non-autoregressive Diffusion Framework for Talking Head Video Generation [[PDF](https://hanbo-cheng.github.io/DAWN/),[Page](https://hanbo-cheng.github.io/DAWN/)]\n\n[arxiv 2024.12] HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models  [[PDF](),[Page](https://songkey.github.io/hellomeme/)] ![Code](https://img.shields.io/github/stars/HelloVision/HelloMeme?style=social&label=Star)\n\n[arxiv 2024.10] Takin-ADA: Emotion Controllable Audio-Driven Animation with Canonical and Landmark Loss Optimization [[PDF](https://arxiv.org/pdf/2410.14283)]\n\n[arxiv 2024.11] X-Portrait 2: Highly Expressive Portrait Animation [[PDF](),[Page](https://byteaigc.github.io/X-Portrait2/)]\n\n[arxiv 2024.11] EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation[[PDF](https://arxiv.org/abs/2411.10061),[Page](https://antgroup.github.io/ai/echomimic_v2/)]\n\n[arxiv 2024.11]  ConsistentAvatar: Learning to Diffuse Fully Consistent Talking Head Avatar with Temporal Guidance [[PDF](https://arxiv.org/abs/2411.15436)] \n\n[arxiv 2024.11] LetsTalk: Latent Diffusion Transformer for Talking Video Synthesis  [[PDF](https://arxiv.org/abs/2411.16748),[Page](https://zhang-haojie.github.io/project-pages/letstalk.html)] ![Code](https://img.shields.io/github/stars/zhang-haojie/letstalk?style=social&label=Star)\n\n[arxiv 2024.11] EmotiveTalk: Expressive Talking Head Generation through Audio Information Decoupling and Emotional Video Diffusion  [[PDF](https://arxiv.org/abs/2411.16726)] \n\n[arxiv 2024.11] Sonic: Shifting Focus to Global Audio Perception in Portrait Animation  [[PDF](https://arxiv.org/pdf/2411.16331),[Page](https://jixiaozhong.github.io/Sonic/)] ![Code](https://img.shields.io/github/stars/jixiaozhong/Sonic?style=social&label=Star)\n\n[arxiv 2024.12] EmojiDiff: Advanced Facial Expression Control with High Identity Preservation in Portrait Generation  [[PDF](https://arxiv.org/abs/2412.01254),[Page](https://emojidiff.github.io/)] \n\n[arxiv 2024.12]  FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait 
[[PDF](https://arxiv.org/abs/2412.01064),[Page](https://deepbrainai-research.github.io/float/)] \n\n[arxiv 2024.12]  Hallo3: Highly Dynamic and Realistic Portrait Image Animation with Diffusion Transformer Networks [[PDF](https://arxiv.org/pdf/2412.00733),[Page](https://github.com/fudan-generative-vision/hallo3)] ![Code](https://img.shields.io/github/stars/fudan-generative-vision/hallo3?style=social&label=Star)\n\n[arxiv 2024.12] MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation  [[PDF](https://arxiv.org/abs/2412.04448),[Page](https://memoavatar.github.io/)] ![Code](https://img.shields.io/github/stars/memoavatar/memo?style=social&label=Star)\n\n[arxiv 2024.12] PortraitTalk: Towards Customizable One-Shot Audio-to-Talking Face Generation  [[PDF](https://arxiv.org/abs/2412.07754)]\n\n[arxiv 2024.12] CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models  [[PDF](https://arxiv.org/abs/2412.12093),[Page](https://felixtaubner.github.io/cap4d/)] ![Code](https://img.shields.io/github/stars/felixtaubner/cap4d/?style=social&label=Star)\n\n[arxiv 2024.12]  VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping [[PDF](https://arxiv.org/abs/2403.16999),[Page](https://hao-shao.com/projects/vividface.html)] ![Code](https://img.shields.io/github/stars/deepcs233/VividFace?style=social&label=Star)\n\n[arxiv 2024.12] OSA-LCM: Real-time One-Step Diffusion-based Expressive Portrait Videos Generation  [[PDF](http://arxiv.org/abs/2412.13479),[Page](https://guohanzhong.github.io/osalcm/)] ![Code](https://img.shields.io/github/stars/Guohanzhong/OSA-LCM?style=social&label=Star)\n\n[arxiv 2024.12] INFP: Audio-Driven Interactive Head Generation in Dyadic Conversations  [[PDF](https://www.arxiv.org/pdf/2412.04037),[Page](https://grisoon.github.io/INFP/)]\n\n[arxiv 2025.01]  JoyGen: Audio-Driven 3D Depth-Aware Talking-Face Video Editing [[PDF](https://arxiv.org/abs/2501.01798),[Page](https://joy-mm.github.io/JoyGen/)] ![Code](https://img.shields.io/github/stars/JOY-MM/JoyGen?style=social&label=Star)\n\n[arxiv 2025.02]  Long-Term TalkingFace Generation via Motion-Prior Conditional Diffusion Model [[PDF](https://arxiv.org/pdf/2502.09533)]\n\n[arxiv 2025.02] SayAnything: Audio-Driven Lip Synchronization with Conditional Video Diffusion  [[PDF](https://arxiv.org/pdf/2502.11515)]\n\n[arxiv 2025.02] SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformer  [[PDF](https://arxiv.org/abs/2502.10841),[Page](https://skyworkai.github.io/skyreels-a1.github.io/)] ![Code](https://img.shields.io/github/stars/SkyworkAI/SkyReels-A1?style=social&label=Star)\n\n[arxiv 2025.02]  AV-Flow: Transforming Text to Audio-Visual Human-like Interactions [[PDF](https://arxiv.org/abs/2502.13133),[Page](https://aggelinacha.github.io/AV-Flow/)] \n\n[arxiv 2025.02] InsTaG: Learning Personalized 3D Talking Head from Few-Second Video  [[PDF](https://arxiv.org/abs/2502.20387),[Page](https://fictionarry.github.io/InsTaG/)] ![Code](https://img.shields.io/github/stars/Fictionarry/InsTaG?style=social&label=Star)\n\n[arxiv 2025.02] ARTalk: Speech-Driven 3D Head Animation via Autoregressive Model  [[PDF](https://arxiv.org/abs/2502.20323),[Page](https://xg-chu.site/project_artalk/)] \n\n[arxiv 2025.02]  High-Fidelity Relightable Monocular Portrait Animation with Lighting-Controllable Video Diffusion Model [[PDF](https://arxiv.org/pdf/2502.19894)]\n\n[arxiv 2025.03] KeyFace: Expressive Audio-Driven Facial Animation for Long Sequences via KeyFrame Interpolation  
[[PDF](https://arxiv.org/abs/2503.01715)]\n\n[arxiv 2025.03]  PC-Talk: Precise Facial Animation Control for Audio-Driven Talking Face Generation [[PDF](https://arxiv.org/abs/2503.14295),[Page](https://bq-wang0511.github.io/PC-Talk/)] \n\n[arxiv 2025.03] Cafe-Talk: Generating 3D Talking Face Animation with Multimodal Coarse- and Fine-grained Control  [[PDF](https://arxiv.org/abs/2503.14517),[Page](https://harryxd2018.github.io/cafe-talk/)] \n\n[arxiv 2025.04] Audio-visual Controlled Video Diffusion with Masked Selective State Spaces Modeling for Natural Talking Head Generation  [[PDF](https://arxiv.org/abs/2504.02542),[Page](https://harlanhong.github.io/publications/actalker/index.html)] ![Code](https://img.shields.io/github/stars/harlanhong/ACTalker?style=social&label=Star)\n\n[arxiv 2025.04] OmniTalker: Real-Time Text-Driven Talking Head Generation with In-Context Audio-Visual Style Replication  [[PDF](https://arxiv.org/abs/2504.02433),[Page](https://humanaigc.github.io/omnitalker)] \n\n[arxiv 2025.05] IM-Portrait: Learning 3D-aware Video Diffusion for Photorealistic Talking Heads from Monocular Videos  [[PDF](https://arxiv.org/abs/2504.19165),[Page](https://y-u-a-n-l-i.github.io/projects/IM-Portrait/)] \n\n[arxiv 2025.07] MoDA: Multi-modal Diffusion Architecture for Talking Head Generation  [[PDF](https://arxiv.org/pdf/2507.03256)]\n\n[arxiv 2025.07]  ATL-Diff: Audio-Driven Talking Head Generation with Early Landmarks-Guide Noise Diffusion [[PDF](https://arxiv.org/abs/2507.12804),[Page](https://github.com/sonvth/ATL-Diff)] ![Code](https://img.shields.io/github/stars/sonvth/ATL-Diff?style=social&label=Star)\n\n[arxiv 2025.07]  Livatar-1: Real-Time Talking Heads Generation with Tailored Flow Matching [[PDF](https://arxiv.org/abs/2507.18649),[Page](https://h-liu1997.github.io/Livatar-1/)] \n\n[arxiv 2025.07]  Mask-Free Audio-driven Talking Face Generation for Enhanced Visual Quality and Identity Preservation [[PDF](https://arxiv.org/abs/2507.20953)]\n\n[arxiv 2025.07]  Who is a Better Talker: Subjective and Objective Quality Assessment for AI-Generated Talking Heads [[PDF](https://arxiv.org/pdf/2507.23343),[Page](https://github.com/zyj-2000/Talker)] ![Code](https://img.shields.io/github/stars/zyj-2000/Talker?style=social&label=Star)\n\n[arxiv 2025.08]  RealTalk: Realistic Emotion-Aware Lifelike Talking-Head Synthesis [[PDF](https://arxiv.org/pdf/2508.12163)]\n\n[arxiv 2025.08]  TalkVid: A Large-Scale Diversified Dataset for Audio-Driven Talking Head Synthesis [[PDF](https://arxiv.org/abs/2508.13618),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.09]  Lip-Synchronized and Emotion-Aware Talking Face Generation via Multi-Modal Emotion Embedding [[PDF](https://arxiv.org/abs/2509.19965),[Page](https://novicemm.github.io/synchrorama/)] ![Code](https://img.shields.io/github/stars/novicemm/synchrorama_?style=social&label=Star)\n\n[arxiv 2025.09] Stable Video-Driven Portraits  [[PDF](https://arxiv.org/pdf/2509.17476)]\n\n[arxiv 2025.09]  Follow-Your-Emoji-Faster: Towards Efficient, Fine-Controllable, and Expressive Freestyle Portrait Animation [[PDF](https://arxiv.org/abs/2509.16630),[Page](https://follow-your-emoji.github.io/)] ![Code](https://img.shields.io/github/stars/mayuelala/FollowYourEmoji?style=social&label=Star)\n\n[arxiv 2025.10]  Audio Driven Real-Time Facial Animation for Social Telepresence [[PDF](https://arxiv.org/abs/2510.01176),[Page](https://jiyewise.github.io/projects/AudioRTA/)] \n\n[arxiv 2025.10] Lookahead Anchoring: Preserving 
Character Identity in Audio-Driven Human Animation  [[PDF](https://arxiv.org/abs/2510.23581),[Page](https://lookahead-anchoring.github.io/)] ![Code](https://img.shields.io/github/stars/j0seo/lookahead-anchoring?style=social&label=Star)\n\n[arxiv 2025.10] MAGIC-Talk: Motion-aware Audio-Driven Talking Face Generation with Customizable Identity Control  [[PDF](https://arxiv.org/abs/2510.22810)]\n\n[arxiv 2025.12] IMTalker: Efficient Audio-driven Talking Face Generation with Implicit Motion Transfer  [[PDF](https://arxiv.org/abs/XXXX.XXXXX),[Page](https://cbsjtu01.github.io/IMTalker/)] ![Code](https://img.shields.io/github/stars/cbsjtu01/IMTalker?style=social&label=Star)\n\n[arxiv 2025.12] DeX-Portrait: Disentangled and Expressive Portrait Animation via Explicit and Latent Motion Representations  [[PDF](https://arxiv.org/abs/2512.15524v1),[Page](https://syx132.github.io/DeX-Portrait/)] \n\n[arxiv 2025.12] In-Context Audio Control of Video Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2512.18772)]\n\n[arxiv 2025.12]  DyStream: Streaming Dyadic Talking Heads Generation via Flow Matching-based Autoregressive Model [[PDF](https://pub-89a94288e5914c929c18a5b103c5cea0.r2.dev/DyStream.pdf),[Page](https://github.com/RobinWitch/DyStream)] ![Code](https://img.shields.io/github/stars/RobinWitch/DyStream?style=social&label=Star)\n\n[arxiv 2026.01] RSATalker: Realistic Socially-Aware Talking Head Generation for Multi-Turn Conversation  [[PDF](https://arxiv.org/pdf/2601.10606)]\n\n[arxiv 2026.03] ECHO: Towards Emotionally Appropriate and Contextually Aware Interactive Head Generation  [[PDF](https://arxiv.org/abs/2603.17427)]\n\n[arxiv 2026.03] EARTalking: End-to-end GPT-style Autoregressive Talking Head Synthesis with Frame-wise Control  [[PDF](https://arxiv.org/abs/2603.20307)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Talking Body\n[arxiv 2024.09] CyberHost: Taming Audio-driven Avatar Diffusion Model with Region Codebook Attention [[PDF](https://arxiv.org/pdf/2409.01876),[Page](https://cyberhost.github.io/)]\n\n[arxiv 2025.01] EMO2: End-Effector Guided Audio-Driven Avatar Video Generation  [[PDF](https://arxiv.org/abs/2501.10687),[Page](https://humanaigc.github.io/emote-portrait-alive-2/)] \n\n[arxiv 2025.02]  OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models [[PDF](http://arxiv.org/abs/2502.01061),[Page](https://omnihuman-lab.github.io/)] \n\n[arxiv 2025.03] Versatile Multimodal Controls for Whole-Body Talking Human Animation  [[PDF](https://arxiv.org/pdf/2503.08714)]\n\n[arxiv 2025.03] MagicInfinite: Generating Infinite Talking Videos with Your Words and Voice  [[PDF](https://arxiv.org/abs/2503.05978),[Page](https://www.hedra.com/)] \n\n[arxiv 2025.04]  MoCha: Towards Movie-Grade Talking Character Synthesis [[PDF](https://arxiv.org/abs/2503.23307),[Page](https://congwei1230.github.io/MoCha/)] \n\n[arxiv 2025.04]  FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis [[PDF](https://arxiv.org/abs/2504.04842),[Page](https://fantasy-amap.github.io/fantasy-talking/)] ![Code](https://img.shields.io/github/stars/Fantasy-AMAP/fantasy-talking?style=social&label=Star)\n\n\n[arxiv 2025.04] DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance  [[PDF](https://arxiv.org/abs/2504.01724),[Page](https://grisoon.github.io/DreamActor-M1/)] \n\n\n[arxiv 2025.05]  HunyuanCustom: A Multimodal-Driven Architecture for Customized Video 
Generation [[PDF](https://arxiv.org/pdf/2505.04512),[Page](https://hunyuancustom.github.io/)] ![Code](https://img.shields.io/github/stars/Tencent/HunyuanCustom?style=social&label=Star)\n\n[arxiv 2025.05]  Hallo4: High-Fidelity Dynamic Portrait Animation via Direct Preference Optimization and Temporal Motion Modulation [[PDF](https://github.com/xyz123xyz456/hallo4),[Page](https://github.com/xyz123xyz456/hallo4)] ![Code](https://img.shields.io/github/stars/xyz123xyz456/hallo4?style=social&label=Star)\n\n[arxiv 2025.06]  Seeing Voices: Generating A-Roll Video from Audio with Mirage [[PDF](https://arxiv.org/abs/2506.08279),[Page](http://mirage.app/research/seeing-voices)] \n\n[arxiv 2025.06]  HunyuanVideo-Avatar: High-Fidelity Audio-Driven Human Animation for Multiple Characters [[PDF](https://arxiv.org/pdf/2505.20156),[Page](https://hunyuanvideo-avatar.github.io/)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HunyuanVideo-Avatar?style=social&label=Star)\n\n[arxiv 2025.06]  Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation [[PDF](https://arxiv.org/abs/2505.22647),[Page](https://meigen-ai.github.io/multi-talk/)] ![Code](https://img.shields.io/github/stars/MeiGen-AI/MultiTalk?style=social&label=Star)\n\n[arxiv 2025.06] TalkingMachines: Real-Time Audio-Driven FaceTime-Style Video via Autoregressive Diffusion Models [[PDF](https://arxiv.org/abs/2506.03099),[Page](https://aaxwaz.github.io/TalkingMachines/)] \n\n[arxiv 2025.06] AlignHuman: Improving Motion and Fidelity via Timestep-Segment Preference Optimization for Audio-Driven Human Animation  [[PDF](https://arxiv.org/abs/2506.11144),[Page](https://alignhuman.github.io/)] \n\n[arxiv 2025.06]  OmniAvatar: Efficient Audio-Driven Avatar Video Generation with Adaptive Body Animation [[PDF](https://arxiv.org/abs/2506.18866),[Page](https://omni-avatar.github.io/)] ![Code](https://img.shields.io/github/stars/Omni-Avatar/OmniAvatar?style=social&label=Star)\n\n[arxiv 2025.07] EchoMimicV3: 1.3B Parameters are All You Need for Unified Multi-Modal and Multi-Task Human Animation  [[PDF](https://arxiv.org/pdf/2507.03905)]\n\n[arxiv 2025.07] Democratizing High-Fidelity Co-Speech Gesture Video Generation  [[PDF](https://arxiv.org/abs/2507.06812),[Page](https://mpi-lab.github.io/Democratizing-CSG/)] ![Code](https://img.shields.io/github/stars/MPI-Lab/Democratizing-CSG?style=social&label=Star)\n\n[arxiv 2025.08] StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation  [[PDF](https://arxiv.org/abs/2508.08248),[Page](https://francis-rings.github.io/StableAvatar/)] ![Code](https://img.shields.io/github/stars/Francis-Rings/StableAvatar?style=social&label=Star)\n\n[arxiv 2025.08]  FantasyTalking2: Timestep-Layer Adaptive Preference Optimization for Audio-Driven Portrait Animation [[PDF](https://arxiv.org/abs/2508.11255),[Page](https://fantasy-amap.github.io/fantasy-talking2/)] ![Code](https://img.shields.io/github/stars/Fantasy-AMAP/fantasy-talking2?style=social&label=Star)\n\n[arxiv 2025.08] InfiniteTalk: Audio-driven Video Generation for Sparse-Frame Video Dubbing [[PDF](https://arxiv.org/abs/2508.14033),[Page](https://meigen-ai.github.io/InfiniteTalk/)] ![Code](https://img.shields.io/github/stars/MeiGen-AI/InfiniteTalk?style=social&label=Star)\n\n[arxiv 2025.08] OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation  [[PDF](https://arxiv.org/abs/2508.19209),[Page](https://omnihuman-lab.github.io/v1_5/)] \n\n[arxiv 2025.08]  Wan-S2V: Audio-Driven Cinematic Video Generation 
[[PDF](https://arxiv.org/abs/2508.18621)]\n\n[arxiv 2025.08]  MIDAS: Multimodal Interactive Digital-humAn Synthesis via Real-time Autoregressive Video Generation [[PDF](https://arxiv.org/pdf/2508.19320),[Page](https://chenmingthu.github.io/milm/)] \n\n[arxiv 2025.08] InfinityHuman: Towards Long-Term Audio-Driven Human  [[PDF](https://arxiv.org/abs/2508.20210),[Page](https://infinityhuman.github.io/)] \n\n[arxiv 2025.09]  HuMo: Human-Centric Video Generation via Collaborative Multi-Modal Conditioning [[PDF](https://arxiv.org/abs/2509.08519),[Page](https://phantom-video.github.io/HuMo/)] ![Code](https://img.shields.io/github/stars/Phantom-video/HuMo?style=social&label=Star)\n\n[arxiv 2025.09] Kling-Avatar: Grounding Multimodal Instructions for Cascaded Long-Duration Avatar Animation Synthesis  [[PDF](https://arxiv.org/abs/2509.09595),[Page](https://klingavatar.github.io/)] \n\n[arxiv 2025.10] X-Streamer: Unified Human World Modeling with Audiovisual Interaction  [[PDF](https://arxiv.org/abs/2509.21574),[Page](https://byteaigc.github.io/X-Streamer/)] \n\n[arxiv 2025.10]  VividAnimator: An End-to-End Audio and Pose-driven Half-Body Human Animation Framework [[PDF](https://arxiv.org/pdf/2510.10269)]\n\n[arxiv 2025.11] ConsistTalk: Intensity Controllable Temporally Consistent Talking Head Generation with Diffusion Noise Search  [[PDF](https://arxiv.org/pdf/2511.06833)]\n\n[arxiv 2025.12] AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement  [[PDF](https://arxiv.org/abs/2511.23475),[Page](https://hkust-c4g.github.io/AnyTalker-homepage/)] ![Code](https://img.shields.io/github/stars/HKUST-C4G/AnyTalker?style=social&label=Star)\n\n[arxiv 2025.12] Soul: Breathe Life into Digital Human for High-fidelity Long-term Multimodal Animation  [[PDF](https://arxiv.org/abs/2512.13495),[Page](https://zhangzjn.github.io/projects/Soul/)] ![Code](https://img.shields.io/github/stars/zhangzjn/Soul?style=social&label=Star)\n\n[arxiv 2025.12]  KlingAvatar 2.0 Technical Report [[PDF](https://arxiv.org/abs/2512.13313),[Page](https://app.klingai.com/global/ai-human/image/new/)] \n\n[arxiv 2025.12]  TalkVerse: Democratizing Minute-Long Audio-Driven Video Generation [[PDF](https://arxiv.org/abs/2512.14938),[Page](https://zhenzhiwang.github.io/talkverse/)] ![Code](https://img.shields.io/github/stars/snap-research/TalkVerse?style=social&label=Star)\n\n[arxiv 2025.12] ActAvatar: Temporally-Aware Precise Action Control for Talking Avatars  [[PDF](https://arxiv.org/abs/2512.19546),[Page](https://ziqiaopeng.github.io/ActAvatar/)] \n\n[arxiv 2026.01]  Making Avatars Interact Towards Text-Driven Human-Object Interaction for Controllable Talking Avatars [[PDF](https://arxiv.org/abs/2602.01538),[Page](https://interactavatar.github.io/)] ![Code](https://img.shields.io/github/stars/angzong/InteractAvatar?style=social&label=Star)\n\n[arxiv 2026.01]  JoyAvatar: Unlocking Highly Expressive Avatars via Harmonized Text-Audio Conditioning [[PDF](https://arxiv.org/abs/2602.00702),[Page](https://joyavatar.github.io/)] \n\n[arxiv 2026.03] Gloria: Consistent Character Video Generation via Content Anchors  [[PDF](https://arxiv.org/abs/2603.29931),[Page](https://yyvhang.github.io/Gloria_Page/)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Listen\n[arxiv 2025.04]  DiTaiListener: Controllable High Fidelity Listener Video Generation with Diffusion 
[[PDF](https://arxiv.org/abs/2504.04010),[Page](https://havent-invented.github.io/DiTaiListener)] \n\n[arxiv 2025.06] Diffusion-based Realistic Listening Head Generation via Hybrid Motion Modeling  [[PDF](https://openaccess.thecvf.com/content/CVPR2025/papers/Wang_Diffusion-based_Realistic_Listening_Head_Generation_via_Hybrid_Motion_Modeling_CVPR_2025_paper.pdf),[Page](https://nuo1wang.github.io/DiffListener/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Realtime Avatar \n[arxiv 2025.12] JoyAvatar: Real-time and Infinite Audio-Driven Avatar Generation with Autoregressive Diffusion  [[PDF](https://arxiv.org/abs/2512.11423)] \n\n[arxiv 2025.12]  PersonaLive! Expressive Portrait Image Animation for Live Streaming [[PDF](https://arxiv.org/abs/2512.11253),[Page](https://github.com/GVCLab/PersonaLive)] ![Code](https://img.shields.io/github/stars/GVCLab/PersonaLive?style=social&label=Star)\n\n[arxiv 2025.12]  Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length [[PDF](https://arxiv.org/abs/2512.04677),[Page](https://liveavatar.github.io/)] ![Code](https://img.shields.io/github/stars/Alibaba-Quark/LiveAvatar?style=social&label=Star)\n\n[arxiv 2025.12] StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars  [[PDF](https://arxiv.org/abs/2512.22065),[Page](https://streamavatar.github.io/)] \n\n[arxiv 2025.12]  Knot Forcing: Taming Autoregressive Video Diffusion Models for Real-time Infinite Interactive Portrait Animation [[PDF](https://arxiv.org/abs/2512.21734),[Page](https://humanaigc.github.io/knot_forcing_demo_page/)] \n\n[arxiv 2025.12]  SoulX-LiveTalk Technical Report [[PDF](https://arxiv.org/pdf/2512.23379),[Page](https://soul-ailab.github.io/soulx-livetalk/)] ![Code](https://img.shields.io/github/stars/Soul-AILab/SoulX-LiveTalk?style=social&label=Star)\n\n[arxiv 2026.01] Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation  [[PDF](https://arxiv.org/abs/2601.00664),[Page](https://taekyungki.github.io/AvatarForcing/)] ![Code](https://img.shields.io/github/stars/TaekyungKi/AvatarForcing?style=social&label=Star)\n\n[arxiv 2026.01] FlowAct-R1: Towards Interactive Humanoid Video Generation  [[PDF](https://arxiv.org/pdf/2601.10103),[Page](https://grisoon.github.io/FlowAct-R1/)] \n\n[arxiv 2026.02] EchoTorrent: Towards Swift, Sustained, and Streaming Multi-Modal Video Generation  [[PDF](https://arxiv.org/abs/2602.13669)]\n\n[arxiv 2026.03] AvatarForcing: One-Step Streaming Talking Avatars via Local-Future Sliding-Window Denoising  [[PDF](https://arxiv.org/abs/2603.14331),[Page](https://cuiliyuan121.github.io/AvatarForcing/)]\n\n[arxiv 2026.03] SoulX-LiveAct: Towards Hour-Scale Real-Time Human Animation with Neighbor Forcing and ConvKV Memory  [[PDF](https://arxiv.org/abs/2603.11746),[Page](https://demopagedemo.github.io/LiveAct/)] ![Code](https://img.shields.io/github/stars/Soul-AILab/SoulX-LiveAct?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Multi-person talking Video Generation \n[arxiv 2025.06]  Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation [[PDF](https://arxiv.org/abs/2505.22647),[Page](https://meigen-ai.github.io/multi-talk/)] ![Code](https://img.shields.io/github/stars/MeiGen-AI/MultiTalk?style=social&label=Star)\n\n[arxiv 2025.06] 
InterActHuman: Multi-Concept Human Animation with Layout-Aligned Audio Conditions  [[PDF](https://arxiv.org/abs/2506.09984)]\n\n[arxiv 2025.08]  ShoulderShot: Generating Over-the-Shoulder Dialogue Videos [[PDF](https://arxiv.org/abs/2508.07597),[Page](https://shouldershot.github.io/)]\n\n[arxiv 2026.03]  InterDyad: Interactive Dyadic Speech-to-Video Generation by Querying Intermediate Visual Guidance [[PDF](https://arxiv.org/pdf/2603.23132),[Page](https://interdyad.github.io/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## HOI \n\n[arxiv 2024.11] AnchorCrafter: Animate CyberAnchors Saling Your Products via Human-Object Interacting Video Generation  [[PDF](https://arxiv.org/abs/2411.17383),[Page](https://cangcz.github.io/Anchor-Crafter/)] ![Code](https://img.shields.io/github/stars/cangcz/AnchorCrafter?style=social&label=Star)\n\n[arxiv 2026.03] DISPLAY: Directable Human-Object Interaction Video Generation via Sparse Motion Guidance and Multi-Task Auxiliary  [[PDF](https://arxiv.org/abs/2603.09883),[Page](https://mumuwei.github.io/DISPLAY/)]\n\n[arxiv 2026.03] MVHOI: Bridge Multi-view Condition to Complex Human-Object Interaction Video Reenactment via 3D Foundation Model  [[PDF](https://arxiv.org/abs/2603.14686)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## video-driven talking \n[arxiv 2025.04] DreamActor-M1: Holistic, Expressive and Robust Human Image Animation with Hybrid Guidance  [[PDF](https://arxiv.org/abs/2504.01724),[Page](https://grisoon.github.io/DreamActor-M1/)] \n\n[arxiv 2025.07]  FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers [[PDF](https://arxiv.org/abs/2507.12956),[Page](https://fantasy-amap.github.io/fantasy-portrait/)] ![Code](https://img.shields.io/github/stars/Fantasy-AMAP/fantasy-portrait?style=social&label=Star)\n\n[arxiv 2025.09] Wan-Animate: Unified Character Animation and Replacement with Holistic Replication  [[PDF](https://arxiv.org/abs/2509.14055),[Page](https://humanaigc.github.io/wan-animate/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## video dubbing\n[arxiv 2024.10] MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting [[PDF](https://arxiv.org/abs/2410.10122),[Page](https://github.com/TMElyralab/MuseTalk)]\n\n[arxiv 2024.12]  LatentSync: Audio Conditioned Latent Diffusion Models for Lip Sync [[PDF](https://arxiv.org/abs/2412.09262),[Page](https://github.com/bytedance/LatentSync)] ![Code](https://img.shields.io/github/stars/bytedance/LatentSync?style=social&label=Star)\n\n[arxiv 2025.03]  RASA: Replace Anyone, Say Anything – A Training-Free Framework for Audio-Driven and Universal Portrait Video Editing [[PDF](https://arxiv.org/abs/2503.11571),[Page](https://alice01010101.github.io/RASA/)]\n\n[arxiv 2025.04]  DeepDubber-V1: Towards High Quality and Dialogue, Narration, Monologue Adaptive Movie Dubbing Via Multi-Modal Chain-of-Thoughts Reasoning Guidance 
[[PDF](https://arxiv.org/abs/2503.23660)]\n\n[arxiv 2025.04]  VoiceCraft-Dub: Automated Video Dubbing with Neural Codec Language Models [[PDF](https://arxiv.org/abs/2504.02386),[Page](https://voicecraft-dub.github.io/)] \n\n[arxiv 2025.05]  FlowDubber: Movie Dubbing with LLM-based Semantic-aware Learning and Flow Matching based Voice Enhancing [[PDF](https://arxiv.org/abs/2505.01263),[Page](https://galaxycong.github.io/LLM-Flow-Dubber/)] \n\n[arxiv 2025.05]  KeySync: A Robust Approach for Leakage-free Lip Synchronization in High Resolution [[PDF](https://arxiv.org/abs/2505.00497),[Page](https://antonibigata.github.io/KeySync/)] ![Code](https://img.shields.io/github/stars/antonibigata/keysync?style=social&label=Star)\n\n[arxiv 2025.06] SkyReels-Audio: Omni Audio-Conditioned Talking Portraits in Video Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2506.00830),[Page](https://skyworkai.github.io/skyreels-audio.github.io/)] \n\n[arxiv 2025.09] StableDub: Taming Diffusion Prior for Generalized and Efficient Visual Dubbing  [[PDF](https://arxiv.org/abs/2509.21887),[Page](https://stabledub.github.io/)] \n\n[arxiv 2025.12]  SyncAnyone: Implicit Disentanglement via Progressive Self-Correction for Lip-Syncing in the wild [[PDF](https://arxiv.org/abs/2512.21736),[Page](https://humanaigc.github.io/sync_anyone_demo_page/)] \n\n[arxiv 2025.12] From Inpainting to Editing: A Self-Bootstrapping Framework for Context-Rich Visual Dubbing  [[PDF](https://arxiv.org/abs/2512.25066),[Page](https://hjrphoebus.github.io/X-Dub/)] ![Code](https://img.shields.io/github/stars/hjrPhoebus/X-Dub?style=social&label=Star)\n\n[arxiv 2026.03] OmniEdit: A Training-free framework for Lip Synchronization and Audio-Visual Editing  [[PDF](https://arxiv.org/pdf/2603.09084),[Page](https://github.com/l1346792580123/OmniEdit)] ![Code](https://img.shields.io/github/stars/l1346792580123/OmniEdit?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## chatting\n[arxiv 2025.09]  X-Streamer: Unified Human World Modeling with Audiovisual Interaction [[PDF](https://arxiv.org/abs/2509.21574),[Page](https://byteaigc.github.io/X-Streamer/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## TTS \n[arxiv 2025.02] IndexTTS: An Industrial-Level Controllable and Efficient Zero-Shot Text-To-Speech System  [[PDF](https://arxiv.org/pdf/2502.05512),[Page](https://index-tts.github.io/)] ![Code](https://img.shields.io/github/stars/index-tts/index-tts?style=social&label=Star)\n\n[arxiv 2025.07] IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech  [[PDF](https://arxiv.org/pdf/2506.21619)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## duplex \n[arxiv 2025.05]  DualTalk: Dual-Speaker Interaction for 3D Talking Head Conversations [[PDF](https://arxiv.org/pdf/2505.18096),[Page](https://ziqiaopeng.github.io/dualtalk/)] \n\n[arxiv 2025.07]  ARIG: Autoregressive Interactive Head Generation for Real-time Conversations 
[[PDF](https://arxiv.org/abs/2507.00472),[Page](https://jinyugy21.github.io/ARIG/)] \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## Interaction\n[arxiv 2025.07]  Populate-A-Scene: Affordance-Aware Human Video Generation [[PDF](https://arxiv.org/pdf/2507.00334)]\n\n[arxiv 2025.10] MATRIX: Mask Track Alignment for Interaction-aware Video Generation  [[PDF](https://arxiv.org/abs/2510.07310),[Page](https://cvlab-kaist.github.io/MATRIX/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/MATRIX?style=social&label=Star)\n\n[arxiv 2025.11] StreamDiffusionV2: A Streaming System for Dynamic and Interactive Video Generation  [[PDF](https://arxiv.org/abs/2511.07399),[Page](http://streamdiffusionv2.github.io/)] ![Code](https://img.shields.io/github/stars/chenfengxu714/StreamDiffusionV2?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Ego\n[arxiv 2025.06]  PlayerOne: Egocentric World Simulator [[PDF](https://arxiv.org/pdf/2506.09995),[Page](https://playerone-hku.github.io/)] ![Code](https://img.shields.io/github/stars/yuanpengtu/PlayerOne?style=social&label=Star)\n\n\n## Face swapping \n[arxiv 2024.12] HiFiVFS: High Fidelity Video Face Swapping  [[PDF](https://arxiv.org/abs/2411.18293),[Page](https://cxcx1996.github.io/HiFiVFS/)] \n\n[arxiv 2025.03] High-Fidelity Diffusion Face Swapping with ID-Constrained Facial Conditioning  [[PDF](https://arxiv.org/abs/2503.22179)]\n\n[arxiv 2025.06]  Controllable and Expressive One-Shot Video Head Swapping [[PDF](https://humanaigc.github.io/SwapAnyHead/)]\n\n[arxiv 2026.01] DreamID-V: Bridging the Image-to-Video Gap for High-Fidelity Face Swapping via Diffusion Transformer  [[PDF](https://arxiv.org/abs/2601.01425),[Page](https://guoxu1233.github.io/DreamID-V/)] ![Code](https://img.shields.io/github/stars/bytedance/DreamID-V?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Image-to-video Generation \n[arxiv 2023.09]VideoGen: A Reference-Guided Latent Diffusion Approach for High Definition Text-to-Video Generation [[PDF](https://arxiv.org/abs/2309.00398)]\n\n[arxiv 2023.09]Generative Image Dynamics [[PDF](https://arxiv.org/abs/2309.07906),[Page](http://generative-dynamics.github.io/)]\n\n[arxiv 2023.10]DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors [[PDF](https://arxiv.org/abs/2310.12190), [Page](https://github.com/AILab-CVC/VideoCrafter)]\n\n[arxiv 2023.11]SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction [[PDF](https://arxiv.org/abs/2310.20700),[Page](https://vchitect.github.io/SEINE-project/)]\n\n[arxiv 2023.11]I2VGen-XL: High-Quality Image-to-Video Synthesis via Cascaded Diffusion Models\n[[PDF](https://arxiv.org/abs/2311.04145),[Page](https://i2vgen-xl.github.io/page04.html)]\n\n[arxiv 2023.11]Emu Video: Factorizing Text-to-Video Generation by Explicit Image Conditioning [[PDF](https://arxiv.org/abs/2311.10709),[Page](https://emu-video.metademolab.com/)]\n\n[arxiv 2023.11]MoVideo: Motion-Aware Video Generation with Diffusion Models[[PDF](https://github.com/JingyunLiang/MoVideo/releases/download/v0.0/MoVideo.pdf),[Page](https://jingyunliang.github.io/MoVideo/)]\n\n[arxiv 2023.11]Make Pixels Dance: High-Dynamic Video 
Generation[[PDF](),[Page](https://makepixelsdance.github.io/)]\n\n[arxiv 2023.11]Decouple Content and Motion for Conditional Image-to-Video Generation [[PDF](https://arxiv.org/abs/2311.14294)]\n\n[arxiv 2023.12]ART•V: Auto-Regressive Text-to-Video Generation with Diffusion Models [[PDF](https://arxiv.org/abs/2311.18834), [Page](https://warranweng.github.io/art.v)]\n\n[arxiv 2023.12]MicroCinema: A Divide-and-Conquer Approach for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2311.18829), [Page](https://wangyanhui666.github.io/MicroCinema.github.io/)]\n\n[arxiv 2023.12]DreamVideo: High-Fidelity Image-to-Video Generation with Image Retention and Text Guidance [[PDF](https://arxiv.org/abs/2312.03018),[Page](https://anonymous0769.github.io/DreamVideo/)]\n\n[arxiv 2023.12]LivePhoto: Real Image Animation with Text-guided Motion Control [[PDF](https://arxiv.org/abs/2312.02928), [Page](https://xavierchen34.github.io/LivePhoto-Page/)]\n\n[arxiv 2023.12]I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models [[PDF](https://arxiv.org/abs/2312.16693)]\n\n[arxiv 2024.01]UniVG: Towards UNIfied-modal Video Generation [[PDF](https://arxiv.org/abs/2401.09084),[Page](https://univg-baidu.github.io/)]\n\n[arxiv 2024.03]Tuning-Free Noise Rectification for High Fidelity Image-to-Video Generation [[PDF](https://arxiv.org/abs/2403.02827),[Page](https://noise-rectification.github.io/)]\n\n[arxiv 2024.03]AtomoVideo: High Fidelity Image-to-Video Generation [[PDF](https://arxiv.org/abs/2403.01800),[Page](https://atomo-video.github.io/)]\n\n[arxiv 2024.03]Pix2Gif: Motion-Guided Diffusion for GIF Generation[[PDF](https://arxiv.org/abs/2403.04634),[Page](https://hiteshk03.github.io/Pix2Gif/)]\n\n[arxiv 2024.03]Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts [[PDF](https://arxiv.org/abs/2403.08268),[Page](https://github.com/mayuelala/FollowYourClick)]\n\n[arxiv 2024.03]TimeRewind: Rewinding Time with Image-and-Events Video Diffusion [[PDF](https://arxiv.org/abs/2403.13800),[Page](https://timerewind.github.io/)]\n\n[arxiv 2024.03]TRIP: Temporal Residual Learning with Image Noise Prior for Image-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2403.17005),[Page](https://trip-i2v.github.io/TRIP/)]\n\n[arxiv 2024.04]LASER: Tuning-Free LLM-Driven Attention Control for Efficient Text-conditioned Image-to-Animation [[PDF](https://arxiv.org/abs/2404.13558)]\n\n[arxiv 2024.04]TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2404.16306),[Page](https://merl.com/research/highlights/TI2V-Zero)]\n\n[arxiv 2024.06] I4VGen: Image as Stepping Stone for Text-to-Video Generation[[PDF](https://arxiv.org/abs/2406.02230),[Page](https://xiefan-guo.github.io/i4vgen/)]\n\n[arxiv 2024.06] AID: Adapting Image2Video Diffusion Models for Instruction-based Video Prediction[[PDF](https://arxiv.org/abs/2406.06465),[Page](https://chenhsing.github.io/AID/)]\n\n[arxiv 2024.06] Identifying and Solving Conditional Image Leakage in Image-to-Video Generation[[PDF](https://arxiv.org/pdf/2406.15735),[Page](https://cond-image-leak.github.io/)]\n\n[arxiv 2024.07]Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models [[PDF](https://arxiv.org/abs/2407.15642),[Page](https://maxin-cn.github.io/cinemo_project/)]\n\n[arxiv 2024.09] PhysGen: Rigid-Body 
Physics-Grounded Image-to-Video Generation [[PDF](https://arxiv.org/abs/2409.18964),[Page](https://stevenlsw.github.io/physgen/)]\n\n[arxiv 2024.10] FrameBridge: Improving Image-to-Video Generation with Bridge Models [[PDF](https://arxiv.org/abs/2410.15371),[Page](https://framebridge-demo.github.io/)]\n\n[arxiv 2025.01] Through-The-Mask: Mask-based Motion Trajectories for Image-to-Video Generation [[PDF](https://arxiv.org/abs/1234.56789),[Page](https://guyyariv.github.io/TTM/)] \n\n[arxiv 2025.02] MotionAgent: Fine-grained Controllable Video Generation via Motion Field Agent  [[PDF](https://arxiv.org/pdf/2502.03207)]\n\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## 4D generation \n[arxiv 2023.11]Animate124: Animating One Image to 4D Dynamic Scene [[PDF](https://arxiv.org/abs/2311.14603),[Page](https://animate124.github.io/)]\n\n[arxiv 2023.12]4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling[[PDF](https://arxiv.org/abs/2311.17984), [Page](https://sherwinbahmani.github.io/4dfy)]\n\n[arxiv 2023.12]4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency [[PDF](https://arxiv.org/abs/2312.17225),[Page](https://vita-group.github.io/4DGen/)]\n\n[arxiv 2023.12]DreamGaussian4D: Generative 4D Gaussian Splatting [[PDF](https://arxiv.org/abs/2312.17142), [Page](https://jiawei-ren.github.io/projects/dreamgaussian4d)]\n\n[arxiv 2024.10] AvatarGO: Zero-shot 4D Human-Object Interaction Generation and Animation [[PDF](https://yukangcao.github.io/AvatarGO/),[Page](https://yukangcao.github.io/AvatarGO/)]\n\n[arxiv 2024.10] Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis [[PDF](https://arxiv.org/abs/2410.07155),[Page](https://github.com/YangLing0818/Trans4D)]\n\n[arxiv 2024.11] DimensionX: Create Any 3D and 4D Scenes from a Single Image with Controllable Video Diffusion [[PDF](https://arxiv.org/abs/2411.04928),[Page](https://chenshuo20.github.io/DimensionX/)]\n\n[arxiv 2024.12] CAT4D: Create Anything in 4D with Multi-View Video Diffusion Models  [[PDF](https://arxiv.org/abs/2411.18613),[Page](https://cat-4d.github.io/)] \n\n[arxiv 2024.12] PaintScene4D: Consistent 4D Scene Generation from Text Prompts  [[PDF](https://arxiv.org/abs/2412.04471),[Page](https://paintscene4d.github.io/)] ![Code](https://img.shields.io/github/stars/paintscene4d/paintscene4d.github.io?style=social&label=Star)\n\n[arxiv 2024.12]  4Real-Video: Learning Generalizable Photo-Realistic 4D Video Diffusion [[PDF](https://arxiv.org/abs/2412.04462),[Page](https://snap-research.github.io/4Real-Video/)] \n\n[arxiv 2024.12]  Birth and Death of a Rose [[PDF](https://arxiv.org/abs/2412.05278),[Page](https://chen-geng.com/rose4d)] \n\n[arxiv 2024.12]  DNF: Unconditional 4D Generation with Dictionary-based Neural Fields [[PDF](https://arxiv.org/abs/2412.05161),[Page](https://xzhang-t.github.io/project/DNF/)] \n\n[arxiv 2025.01] AR4D: Autoregressive 4D Generation from Monocular Videos [[PDF](https://arxiv.org/abs/2501.01722),[Page](https://hanxinzhu-lab.github.io/AR4D/)] \n\n[arxiv 2025.02] MVTokenFlow: High-quality 4D Content Generation using Multiview Token Flow  [[PDF](https://arxiv.org/abs/2502.11697),[Page](https://soolab.github.io/MVTokenFlow)] ![Code](https://img.shields.io/github/stars/SooLab/MVTokenFlow?style=social&label=Star)\n\n[arxiv 2025.03]  SV4D 2.0: Enhancing Spatio-Temporal Consistency in Multi-View Video Diffusion for High-Quality 4D Generation 
[[PDF](https://arxiv.org/pdf/2503.16396)]\n\n[arxiv 2025.03] Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency  [[PDF](https://arxiv.org/abs/2503.20785),[Page](https://free4d.github.io/)] ![Code](https://img.shields.io/github/stars/TQTQliu/Free4D?style=social&label=Star)\n\n[arxiv 2025.03] Zero4D: Training-Free 4D Video Generation From Single Video Using Off-the-Shelf Video Diffusion Model  [[PDF](https://arxiv.org/abs/2503.22622),[Page](https://zero4dvid.github.io/)] \n\n[arxiv 2025.04] Video4DGen: Enhancing Video and 4D Generation through Mutual Optimization  [[PDF](https://arxiv.org/abs/2504.04153),[Page](https://video4dgen.github.io/)] ![Code](https://img.shields.io/github/stars/yikaiw/Vidu4D?style=social&label=Star)\n\n[arxiv 2025.06] ORV: 4D Occupancy-centric Robot Video Generation  [[PDF](https://arxiv.org/abs/2506.03079),[Page](https://orangesodahub.github.io/ORV/)] ![Code](https://img.shields.io/github/stars/OrangeSodahub/ORV?style=social&label=Star)\n\n[arxiv 2025.07]  Diffuman4D: 4D Consistent Human View Synthesis from Sparse-View Videos with Spatio-Temporal Diffusion Models [[PDF](https://arxiv.org/abs/2507.13344),[Page](https://diffuman4d.github.io/)] ![Code](https://img.shields.io/github/stars/zju3dv/Diffuman4D?style=social&label=Star)\n\n[arxiv 2025.08]  4DVD: Cascaded Dense-view Video Diffusion Model for High-quality 4D Content Generation [[PDF](https://arxiv.org/abs/2508.04467),[Page](https://4dvd.github.io/)] \n\n[arxiv 2025.08] Dream4D: Lifting Camera-Controlled I2V towards Spatiotemporally Consistent 4D Generation  [[PDF](https://arxiv.org/abs/2410.15957),[Page](https://wanderer7-sk.github.io/Dream4D.github.io/)]\n\n[arxiv 2025.08] 4DNeX: Feed-Forward 4D Generative Modeling Made Easy  [[PDF](https://arxiv.org/abs/2508.13154),[Page](https://4dnex.github.io/)] ![Code](https://img.shields.io/github/stars/3DTopia/4DNeX?style=social&label=Star)\n\n[arxiv 2026.01]  Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis [[PDF](https://arxiv.org/abs/2601.14253),[Page](https://motion3-to-4.github.io/)] ![Code](https://img.shields.io/github/stars/Inception3D/Motion324?style=social&label=Star)\n\n[arxiv 2026.03] VGGRPO: Towards World-Consistent Video Generation with 4D Latent Reward  [[PDF](https://arxiv.org/abs/2603.26599),[Page](https://zhaochongan.github.io/projects/VGGRPO)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## stereo video generation\n[arxiv 2025.05] HoloTime: Taming Video Diffusion Models for Panoramic 4D Scene Generation  [[PDF](https://arxiv.org/abs/2504.21650),[Page](https://zhouhyocean.github.io/holotime/)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/HoloTime?style=social&label=Star)\n\n[arxiv 2026.03] Stereo World Model: Camera-Guided Stereo Video Generation  [[PDF](https://arxiv.org/abs/2603.17375),[Page](https://sunyangtian.github.io/StereoWorld-web/)] ![Code](https://img.shields.io/github/stars/SunYangtian/StereoWorld?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Audio-to-video Generation\n[arxiv 2023.09]Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model Adaptation [[PDF](https://arxiv.org/abs/2309.16429)]\n\n[arxiv 2024.02]Seeing and Hearing Open-domain Visual-Audio Generation with Diffusion Latent Aligners [[PDF](https://arxiv.org/abs/2402.17723),[Page](https://yzxing87.github.io/Seeing-and-Hearing/)]\n\n[arxiv 
2024.04]TAVGBench: Benchmarking Text to Audible-Video Generation [[PDF](https://arxiv.org/abs/2404.14381),[Page](https://github.com/OpenNLPLab/TAVGBench)]\n\n[arxiv 2024.09] Draw an Audio: Leveraging Multi-Instruction for Video-to-Audio Synthesis [[PDF](https://arxiv.org/abs/2409.06135),[Page](https://yannqi.github.io/Draw-an-Audio/)]\n\n[arxiv 2024.11] Tell What You Hear From What You See -- Video to Audio Generation Through Text [[PDF](https://arxiv.org/abs/2411.05679)]\n\n[arxiv 2024.12]  AV-Link: Temporally-Aligned Diffusion Features for Cross-Modal Audio-Video Generation [[PDF](https://arxiv.org/abs/2412.15191),[Page](https://snap-research.github.io/AVLink/)] ![Code](https://img.shields.io/github/stars/snap-research/AVLink?style=social&label=Star)\n\n[arxiv 2024.12] Every Image Listens, Every Image Dances: Music-Driven Image Animation  [[PDF](https://arxiv.org/html/2501.18801v1)]\n\n[arxiv 2025.02]  AGAV-Rater: Enhancing LMM for AI-Generated Audio-Visual Quality Assessment [[PDF](https://arxiv.org/abs/2501.18314),[Page](https://agav-rater.github.io/)] \n\n[arxiv 2025.02] UniForm: A Unified Diffusion Transformer for Audio-Video Generation  [[PDF](https://arxiv.org/abs/2502.03897),[Page](https://uniform-t2av.github.io/)] \n\n[arxiv 2025.03]  MusicInfuser: Making Video Diffusion Listen and Dance [[PDF](https://arxiv.org/abs/2503.14505),[Page](https://susunghong.github.io/MusicInfuser/)] ![Code](https://img.shields.io/github/stars/SusungHong/MusicInfuser?style=social&label=Star)\n\n[arxiv 2025.03] Zero-Shot Audio-Visual Editing via Cross-Modal Delta Denoising  [[PDF](https://arxiv.org/abs/2503.20782),[Page](https://genjib.github.io/project_page/AVED/index.html)] \n\n[arxiv 2025.04] KeyVID: Keyframe-Aware Video Diffusion for Audio-Synchronized Visual Animation  [[PDF](https://arxiv.org/pdf/2504.09656),[Page](https://github.com/XingruiWang/KeyVID)] ![Code](https://img.shields.io/github/stars/XingruiWang/KeyVID?style=social&label=Star)\n\n[arxiv 2025.06]  Audio-Sync Video Generation with Multi-Stream Temporal Control [[PDF](https://arxiv.org/pdf/2506.08003),[Page](https://hjzheng.net/projects/MTV/)] ![Code](https://img.shields.io/github/stars/suimuc/MTV_Framework?style=social&label=Star)\n\n[arxiv 2025.09] Syncphony: Synchronized Audio-to-Video Generation with Diffusion Transformers  [[PDF](https://arxiv.org/abs/2509.21893),[Page](https://jibin86.github.io/syncphony_project_page/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Joint Generation\n[arxiv 2025.04]  JavisDiT: Joint Audio-Video Diffusion Transformer with Hierarchical Spatio-Temporal Prior Synchronization [[PDF](https://arxiv.org/pdf/2503.23377.pdf),[Page](https://javisdit.github.io/)] ![Code](https://img.shields.io/github/stars/JavisDiT/JavisDiT?style=social&label=Star)\n\n[arxiv 2025.06]  Audio-Sync Video Generation with Multi-Stream Temporal Control [[PDF](https://arxiv.org/pdf/2506.08003),[Page](https://hjzheng.net/projects/MTV/)] ![Code](https://img.shields.io/github/stars/suimuc/MTV_Framework?style=social&label=Star)\n\n[arxiv 2025.07]  JAM-Flow: Joint Audio-Motion Synthesis with Flow Matching [[PDF](https://arxiv.org/abs/2506.23552),[Page](https://joonghyuk.com/jamflow-web/)] \n\n[arxiv 2025.08] AudioGen-Omni: A Unified Multimodal Diffusion Transformer for Video-Synchronized Audio, Speech, and Song Generation  
[[PDF](https://arxiv.org/abs/2508.00733),[Page](https://ciyou2.github.io/AudioGen-Omni/)] \n\n[arxiv 2025.09] UniVerse-1: Unified Audio-Video Generation via Stitching of Experts  [[PDF](https://arxiv.org/pdf/2509.06155),[Page](https://dorniwang.github.io/UniVerse-1/)] ![Code](https://img.shields.io/github/stars/Dorniwang/UniVerse-1-code?style=social&label=Star)\n\n[arxiv 2025.10]  Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation [[PDF](https://arxiv.org/abs/2510.01284),[Page](https://aaxwaz.github.io/Ovi/)] ![Code](https://img.shields.io/github/stars/character-ai/Ovi?style=social&label=Star)\n\n[arxiv 2025.10] Taming Text-to-Sounding Video Generation via Advanced Modality Condition and Interaction  [[PDF](https://arxiv.org/abs/2510.03117),[Page](https://bridgedit-t2sv.github.io/)] \n\n[arxiv 2025.11]  UniAVGen: Unified Audio and Video Generation with Asymmetric Cross-Modal Interactions [[PDF](https://arxiv.org/pdf/2511.03334),[Page](https://mcg-nju.github.io/UniAVGen/)] \n\n[arxiv 2025.12]  JoVA: Unified Multimodal Learning for Joint Video-Audio Generation [[PDF](https://arxiv.org/abs/2512.13677),[Page](https://visual-ai.github.io/jova/)] ![Code](https://img.shields.io/github/stars/Visual-AI/JoVA?style=social&label=Star)\n\n[arxiv 2026.01] LTX-2: Efficient Joint Audio-Visual Foundation Model  [[PDF](https://arxiv.org/abs/2601.03233),[Page](https://github.com/Lightricks/LTX-2)] ![Code](https://img.shields.io/github/stars/Lightricks/LTX-2?style=social&label=Star)\n\n[arxiv 2026.01] MM-Sonate: Multimodal Controllable Audio-Video Generation with Zero-Shot Voice Cloning  [[PDF](https://arxiv.org/pdf/2601.01568)]\n\n[arxiv 2026.01]  Klear: Unified Multi-Task Audio-Video Joint Generation [[PDF](https://arxiv.org/abs/2601.04151)]\n\n[arxiv 2026.02] MOVA: Towards Scalable and Synchronized Video-Audio Generation  [[PDF](https://arxiv.org/abs/2602.08794),[Page](https://mosi.cn/models/mova)] ![Code](https://img.shields.io/github/stars/OpenMOSS/MOVA?style=social&label=Star)\n\n[arxiv 2026.02] Alive: Animate Your World with Lifelike Audio-Video Generation  [[PDF](https://arxiv.org/pdf/2602.08682),[Page](https://foundationvision.github.io/Alive/)] ![Code](https://img.shields.io/github/stars/FoundationVision/Alive?style=social&label=Star)\n\n[arxiv 2026.02]  JavisDiT++: Unified Modeling and Optimization for Joint Audio-Video Generation [[PDF](https://arxiv.org/abs/2602.19163),[Page](https://javisverse.github.io/JavisDiT2-page)] ![Code](https://img.shields.io/github/stars/JavisVerse/JavisDiT?style=social&label=Star)\n\n[arxiv 2026.02]  SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing model [[PDF](https://arxiv.org/abs/2602.21818)]\n\n[arxiv 2026.03] OmniForcing: Unleashing Real-time Joint Audio-Visual Generation  [[PDF](https://arxiv.org/abs/2603.11647),[Page](https://omniforcing.com/)] ![Code](https://img.shields.io/github/stars/OmniForcing/OmniForcing?style=social&label=Star)\n\n[arxiv 2026.03] Improving Joint Audio-Video Generation with Cross-Modal Context Learning  [[PDF](https://arxiv.org/abs/2603.18600)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## joint ID\n[arxiv 2026.02]  DreamID-Omni: Unified Framework for Controllable Human-Centric Audio-Video Generation [[PDF](https://arxiv.org/abs/2602.12160),[Page](https://guoxu1233.github.io/DreamID-Omni/)] ![Code](https://img.shields.io/github/stars/Guoxu1233/DreamID-Omni?style=social&label=Star)\n\n[arxiv 2026.02] OmniCustom: Sync Audio-Video 
Customization Via Joint Audio-Video Generation Model  [[PDF](https://arxiv.org/abs/2602.12304),[Page](https://omnicustom-project.github.io/page/)] ![Code](https://img.shields.io/github/stars/OmniCustom-project/OmniCustom?style=social&label=Star)\n\n\n[arxiv 2026.03]  Identity as Presence: Towards Appearance and Voice Personalized Joint Audio-Video Generation [[PDF](https://arxiv.org/pdf/2603.17889),[Page](https://chen-yingjie.github.io/projects/Identity-as-Presence/index.html)] ![Code](https://img.shields.io/github/stars/WeChatCV/Identity-as-Presence?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## video-to-audio generation \n[arxiv 2024.07] Read, Watch and Scream! Sound Generation from Text and Video\n[[PDF](https://arxiv.org/abs/2407.05551), [Page](https://naver-ai.github.io/rewas)]\n\n[arxiv 2025.03] AudioX: Diffusion Transformer for Anything-to-Audio Generation  [[PDF](https://arxiv.org/abs/2503.10522),[Page](https://zeyuet.github.io/AudioX/)] ![Code](https://img.shields.io/github/stars/ZeyueT/AudioX?style=social&label=Star)\n\n[arxiv 2025.03]  DeepAudio-V1: Towards Multi-Modal Multi-Stage End-to-End Video to Speech and Audio Generation [[PDF](https://arxiv.org/abs/2503.22265)]\n\n[arxiv 2025.04] Extending Visual Dynamics for Video-to-Music Generation  [[PDF](https://arxiv.org/abs/2504.07594)]\n\n[arxiv 2025.06] Hearing Hands: Generating Sounds from Physical Interactions in 3D Scenes  [[PDF](https://openaccess.thecvf.com/content/CVPR2025/papers/Dou_Hearing_Hands_Generating_Sounds_from_Physical_Interactions_in_3D_Scenes_CVPR_2025_paper.pdf),[Page](https://www.yimingdou.com/hearing_hands/)] \n\n[arxiv 2025.07] Hear-Your-Click: Interactive Video-to-Audio Generation via Object-aware Contrastive Audio-Visual Fine-tuning  [[PDF](http://arxiv.org/abs/2507.04959),[Page](https://github.com/SynapGrid/Hear-Your-Click)] ![Code](https://img.shields.io/github/stars/SynapGrid/Hear-Your-Click?style=social&label=Star)\n\n[arxiv 2025.07]  ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing [[PDF](https://arxiv.org/html/2506.21448v1),[Page](https://thinksound-demo.github.io/)] ![Code](https://img.shields.io/github/stars/FunAudioLLM/ThinkSound?style=social&label=Star)\n\n[arxiv 2025.08] HunyuanVideo-Foley: Multimodal Diffusion with Representation Alignment for High-Fidelity Foley Audio Generation  [[PDF](https://arxiv.org/abs/2508.16930),[Page](https://szczesnys.github.io/hunyuanvideo-foley/)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HunyuanVideo-Foley?style=social&label=Star)\n\n[arxiv 2025.08] AudioStory: Generating Long-Form Narrative Audio with Large Language Models  [[PDF](https://arxiv.org/abs/2508.20088),[Page](https://github.com/TencentARC/AudioStory)] ![Code](https://img.shields.io/github/stars/TencentARC/AudioStory?style=social&label=Star)\n\n[arxiv 2025.08] AudioGen-Omni: A Unified Multimodal Diffusion Transformer for Video-Synchronized Audio, Speech, and Song Generation  [[PDF](https://arxiv.org/abs/2508.00733),[Page](https://ciyou2.github.io/AudioGen-Omni/)] \n\n[arxiv 2025.09] Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation  
[[PDF](https://arxiv.org/abs/2506.19774),[Page](https://klingfoley.github.io/Kling-Foley/)] ![Code](https://img.shields.io/github/stars/klingfoley/Kling-Foley?style=social&label=Star)\n\n[arxiv 2025.10] Clink! Chop! Thud! — Learning Object Sounds from Real-World Interactions  [[PDF](https://arxiv.org/abs/2510.02313),[Page](https://clink-chop-thud.github.io/)]\n\n[arxiv 2025.10]  Foley Control: Aligning a Frozen Latent Text-to-Audio Model to Video [[PDF](https://stability-ai.github.io/foleycontrol.github.io/FoleyControl/Foley_Control_Final.pdf),[Page](https://stability-ai.github.io/foleycontrol.github.io/)] \n\n[arxiv 2025.12]  EchoFoley: Event-Centric Hierarchical Control for Video Grounded Creative Sound Generation [[PDF](https://arxiv.org/pdf/2512.24731),[Page](https://echofoley.github.io/)] \n\n[arxiv 2026.01]  Omni2Sound: Towards Unified Video-Text-to-Audio Generation [[PDF](https://arxiv.org/pdf/2601.02731),[Page](https://swapforward.github.io/Omni2Sound/)] \n\n[arxiv 2026.01]  SpatialV2A: Visual-Guided High-fidelity Spatial Audio Generation [[PDF](https://arxiv.org/abs/2601.15017)]\n\n[arxiv 2026.03]  V2M-Zero: Zero-Pair Time-Aligned Video-to-Music Generation [[PDF](https://arxiv.org/abs/2603.11042),[Page](https://genjib.github.io/project_page/v2m_zero/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## unified editing and generation\n[arxiv 2025.03] InsViE-1M: Effective Instruction-based Video Editing with Elaborate Dataset Construction  [[PDF](https://arxiv.org/abs/2503.20287),[Page](https://github.com/langmanbusi/InsViE)] ![Code](https://img.shields.io/github/stars/langmanbusi/InsViE?style=social&label=Star)\n\n[arxiv 2025.03] VACE: All-in-One Video Creation and Editing  [[PDF](https://arxiv.org/pdf/2503.07598),[Page](https://ali-vilab.github.io/VACE-Page/)] \n\n[arxiv 2025.03] VEGGIE: Instructional Editing and Reasoning of Video Concepts with Grounded Generation  [[PDF](),[Page](https://veggie-gen.github.io/)] ![Code](https://img.shields.io/github/stars/Yui010206/VEGGIE-VidEdit/?style=social&label=Star)\n\n[arxiv 2025.06] Many-for-Many: Unify the Training of Multiple Video and Image Generation and Manipulation Tasks  [[PDF](https://arxiv.org/abs/2506.01758),[Page](https://github.com/leeruibin/MfM)] ![Code](https://img.shields.io/github/stars/leeruibin/MfM?style=social&label=Star)\n\n[arxiv 2025.06]  UNIC: Unified In-Context Video Editing [[PDF](),[Page](https://zixuan-ye.github.io/UNIC/)] \n\n[arxiv 2025.07] OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions  [[PDF](http://arxiv.org/abs/2506.23361),[Page](https://caiyuanhao1998.github.io/project/OmniVCus/)] ![Code](https://img.shields.io/github/stars/caiyuanhao1998/Open-OmniVCus?style=social&label=Star)\n\n[arxiv 2025.08]  DreamVE: Unified Instruction-based Image and Video Editing [[PDF](https://arxiv.org/abs/2508.06080),[Page](https://zj-binxia.github.io/DreamVE-ProjectPage/)] \n\n[arxiv 2025.10] UniVideo: Unified Understanding, Generation, and Editing for Videos  [[PDF](https://arxiv.org/abs/2510.08377),[Page](https://congwei1230.github.io/UniVideo/)] ![Code](https://img.shields.io/github/stars/KlingTeam/UniVideo?style=social&label=Star)\n\n[arxiv 2025.10] Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset  
[[PDF](https://arxiv.org/abs/2510.15742),[Page](https://editto.net/)] ![Code](https://img.shields.io/github/stars/EzioBy/Ditto?style=social&label=Star)\n\n[arxiv 2025.12] ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning  [[PDF](https://arxiv.org/abs/2512.09924),[Page](https://github.com/Liuxinyv/ReViSE)] ![Code](https://img.shields.io/github/stars/Liuxinyv/ReViSE?style=social&label=Star)\n\n[arxiv 2025.12] Kling-Omni Technical Report  [[PDF](https://arxiv.org/pdf/2512.16776),[Page](https://app.klingai.com/global/omni/new)] \n\n[arxiv 2025.12] Region-Constraint In-Context Generation for Instructional Video Editing  [[PDF](https://arxiv.org/abs/2512.17650),[Page](https://zhw-zhang.github.io/ReCo-page/)] ![Code](https://img.shields.io/github/stars/HiDream-ai/ReCo?style=social&label=Star)\n\n[arxiv 2026.01] OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer  [[PDF](https://arxiv.org/abs/2601.14250),[Page](https://pangzecheung.github.io/OmniTransfer/)] ![Code](https://img.shields.io/github/stars/PangzeCheung/OmniTransfer?style=social&label=Star)\n\n[arxiv 2026.02]  Tele-Omni: a Unified Multimodal Framework for Video Generation and Editing [[PDF](https://arxiv.org/pdf/2602.09609)]\n\n[arxiv 2026.02] Omni-Video 2: Scaling MLLM-Conditioned Diffusion for Unified Video Generation and Editing  [[PDF](https://arxiv.org/abs/2602.08820),[Page](https://howellyoung-s.github.io/Omni-Video2-project/)] ![Code](https://img.shields.io/github/stars/SAIS-FUXI/Omni-Video?style=social&label=Star)\n\n[arxiv 2026.03] Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance  [[PDF](https://arxiv.org/abs/2603.02175),[Page](https://showlab.github.io/Kiwi-Edit/)] ![Code](https://img.shields.io/github/stars/showlab/Kiwi-Edit?style=social&label=Star)\n\n[arxiv 2026.03] NOVA: Sparse Control, Dense Synthesis for Pair-Free Video Editing  [[PDF](https://arxiv.org/abs/2603.02802),[Page](https://github.com/WeChatCV/NovaEdit)] ![Code](https://img.shields.io/github/stars/WeChatCV/NovaEdit?style=social&label=Star)\n\n[arxiv 2026.03] OmniWeaving: Towards Unified Video Generation with Free-form Composition and Reasoning  [[PDF](https://arxiv.org/abs/2603.24458),[Page](https://omniweaving.github.io)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## editing with video models \n[arxiv 2023.12]VIDiff: Translating Videos via Multi-Modal Instructions with Diffusion Models[[PDF](https://arxiv.org/abs/2311.18837),[Page](https://chenhsing.github.io/VIDiff)]\n\n[arxiv 2023.12]Neutral Editing Framework for Diffusion-based Video Editing [[PDF](https://arxiv.org/abs/2312.06708),[Page](https://neuedit.github.io/)]\n\n[arxiv 2024.01]FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis[[PDF](https://arxiv.org/abs/2312.17681),[Page](https://jeff-liangf.github.io/projects/flowvid/)]\n\n[arxiv 2024.02]UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing [[PDF](https://arxiv.org/abs/2402.13185),[Page](https://jianhongbai.github.io/UniEdit/)]\n\n[arxiv 2024.02]Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2402.14780),[Page](https://anonymous-314.github.io/)]\n\n[arxiv 2024.03]FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video Editing[[PDF](https://arxiv.org/abs/2403.06269)]\n\n[arxiv 2024.03]DreamMotion: Space-Time Self-Similarity Score Distillation 
for Zero-Shot Video Editing [[PDF](https://arxiv.org/abs/2403.12002),[Page](https://hyeonho99.github.io/dreammotion/)]\n\n[arxiv 2024.03]EffiVED:Efficient Video Editing via Text-instruction Diffusion Models [[PDF](https://arxiv.org/abs/2403.11568)]\n\n[arxiv 2024.03]Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion [[PDF](https://arxiv.org/abs/2403.14617),[Page](https://videoshop-editing.github.io/)]\n\n[arxiv 2024.03]AnyV2V: A Plug-and-Play Framework For Any Video-to-Video Editing Tasks  [[PDF](https://arxiv.org/pdf/2403.14468.pdf),[Page](https://tiger-ai-lab.github.io/AnyV2V/)]\n\n[arxiv 2024.04]Investigating the Effectiveness of Cross-Attention to Unlock Zero-Shot Editing of Text-to-Video Diffusion Models [[PDF](https://arxiv.org/abs/2404.05519)]\n\n[arxiv 2024.05]I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models[[PDF](https://arxiv.org/abs/2405.16537),[Page](https://i2vedit.github.io/)]\n\n[arxiv 2024.05] Streaming Video Diffusion: Online Video Editing with Diffusion Models[[PDF](https://arxiv.org/abs/2405.1972),[Page](https://github.com/Chenfeng1271/SVDiff)]\n\n[arxiv 2024.06]Zero-Shot Video Editing through Adaptive Sliding Score Distillation[[PDF](https://arxiv.org/abs/2406.04888),[Page](https://nips24videoedit.github.io/zeroshot_videoedit/)]\n\n[arxiv 2024.06]FRAG: Frequency Adapting Group for Diffusion Video Editing[[PDF](https://arxiv.org/abs/2406.06044)]\n\n[arxiv 2024.07] Fine-gained Zero-shot Video Sampling[[PDF](https://arxiv.org/pdf/2407.21475),[Page](https://densechen.github.io/zss/)]\n\n[arxiv 2024.09] DNI: Dilutional Noise Initialization for Diffusion Video Editing [[PDF](https://arxiv.org/abs/2409.13037)]\n\n[arxiv 2024.10]FreeMask: Rethinking the Importance of Attention Masks for Zero-Shot Video Editing[[PDF](https://arxiv.org/abs/2409.20500),[Page](https://freemask-edit.github.io/)]\n\n[arxiv 2024.11] StableV2V: Stablizing Shape Consistency in Video-to-Video Editing [[PDF](https://arxiv.org/abs/2411.11045),[Page](https://alonzoleeeooo.github.io/StableV2V)]\n\n[arxiv 2024.11] VIRES: Video Instance Repainting with Sketch and Text Guidance  [[PDF](https://arxiv.org/abs/2411.16199)]\n\n[arxiv 2024.11] VideoDirector: Precise Video Editing via Text-to-Video Models  [[PDF](https://arxiv.org/abs/2411.17592),[Page](https://anonymous.4open.science/w/c4KzqAbCaz89o0FeWkdya/)] \n\n[arxiv 2024.11] AutoVFX: Physically Realistic Video Editing from Natural Language Instructions [[PDF](https://arxiv.org/abs/2411.02394),[Page](https://haoyuhsu.github.io/autovfx-website/)]\n\n[arxiv 2024.12] MoViE: Mobile Diffusion for Video Editing  [[PDF](https://arxiv.org/abs/2412.06578)]\n\n[arxiv 2024.12] Re-Attentional Controllable Video Diffusion Editing  [[PDF](https://arxiv.org/abs/2412.11710),[Page](https://github.com/mdswyz/ReAtCo)] ![Code](https://img.shields.io/github/stars/mdswyz/ReAtCo?style=social&label=Star)\n\n[arxiv 2024.12] MIVE: New Design and Benchmark for Multi-Instance Video Editing  [[PDF](https://arxiv.org/abs/2412.12877),[Page](https://kaist-viclab.github.io/mive-site/)]\n\n[arxiv 2024.12] AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction  [[PDF](https://arxiv.org/pdf/2412.02684),[Page](https://lingtengqiu.github.io/2024/AniGS/)] ![Code](https://img.shields.io/github/stars/aigc3d/AniGS?style=social&label=Star)\n\n[arxiv 2025.01] Generative Video Propagation  [[PDF](https://arxiv.org/abs/2412.19761),[Page](https://genprop.github.io//)]\n\n[arxiv 2025.02]  DynVFX: 
Augmenting Real Videos with Dynamic Content [[PDF](https://arxiv.org/abs/2502.03621),[Page](https://dynvfx.github.io/)] \n\n[arxiv 2025.02]  Animate Anyone 2: High-Fidelity Character Image Animation with Environment Affordance [[PDF](https://arxiv.org/abs/2502.06145),[Page](https://humanaigc.github.io/animate-anyone-2/)] \n\n[arxiv 2025.02] VFX Creator: Animated Visual Effect Generation with Controllable Diffusion Transformer  [[PDF](https://arxiv.org/abs/2502.05979),[Page](https://vfx-creator0.github.io/)] \n\n[arxiv 2025.02]  AdaFlow: Efficient Long Video Editing via Adaptive Attention Slimming And Keyframe Selection [[PDF](https://arxiv.org/abs/2502.05433),[Page](https://github.com/jidantang55/AdaFlow)] ![Code](https://img.shields.io/github/stars/jidantang55/AdaFlow?style=social&label=Star)\n\n[arxiv 2025.02]  VideoGrain: Modulating Space-Time Attention for Multi-Grained Video Editing [[PDF](https://arxiv.org/abs/2502.17258),[Page](https://knightyxp.github.io/VideoGrain_project_page/)] ![Code](https://img.shields.io/github/stars/knightyxp/VideoGrain?style=social&label=Star)\n\n[arxiv 2025.03] SwapAnyone: Consistent and Realistic Video Synthesis for Swapping Any Person into Any Video  [[PDF](https://arxiv.org/abs/2503.09154),[Page](https://github.com/PKU-YuanGroup/SwapAnyone)] ![Code](https://img.shields.io/github/stars/PKU-YuanGroup/SwapAnyone?style=social&label=Star)\n\n[arxiv 2025.04] Understanding Attention Mechanism in Video Diffusion Models  [[PDF](https://arxiv.org/pdf/2504.12027)]\n\n[arxiv 2025.04] Visual Prompting for One-shot Controllable Video Editing without Inversion  [[PDF](https://arxiv.org/abs/2504.14335),[Page](https://vp4video-editing.github.io/)] \n\n[arxiv 2025.04]  Towards Generalized and Training-Free Text-Guided Semantic Manipulation [[PDF](https://arxiv.org/abs/2504.17269)]\n\n[arxiv 2025.05]  DAPE: Dual-Stage Parameter-Efficient Fine-Tuning for Consistent Video Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2505.07057),[Page](https://junhaoooxia.github.io/DAPE.github.io/)] ![Code](https://img.shields.io/github/stars/junhaoooxia/DAPE.github.io?style=social&label=Star)\n\n[arxiv 2025.06] FlowDirector: Training-Free Flow Steering for Precise Text-to-Video Editing  [[PDF](https://arxiv.org/abs/2506.05046),[Page](https://flowdirector-edit.github.io/)] ![Code](https://img.shields.io/github/stars/Westlake-AGI-Lab/FlowDirector?style=social&label=Star)\n\n[arxiv 2025.06] FADE: Frequency-Aware Diffusion Model Factorization for Video Editing  [[PDF](https://arxiv.org/abs/2506.05934),[Page](https://github.com/EternalEvan/FADE)] ![Code](https://img.shields.io/github/stars/EternalEvan/FADE?style=social&label=Star)\n\n[arxiv 2025.06] LoRA-Edit: Controllable First-Frame-Guided Video Editing via Mask-Aware LoRA Fine-Tuning  [[PDF](https://arxiv.org/abs/2506.10082),[Page](https://cjeen.github.io/LoraEditPaper/)] ![Code](https://img.shields.io/github/stars/cjeen/LoRAEdit?style=social&label=Star)\n\n[arxiv 2025.06] UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting  [[PDF](https://arxiv.org/abs/2506.15673),[Page](https://research.nvidia.com/labs/toronto-ai/UniRelight/)] \n\n[arxiv 2025.06] DFVEdit: Conditional Delta Flow Vector for Zero-shot Video Editing  [[PDF](https://arxiv.org/pdf/2506.20967),[Page](https://dfvedit.github.io/)] ![Code](https://img.shields.io/github/stars/LinglingCai0314/DFVEdit?style=social&label=Star)\n\n[arxiv 2025.06] Shape-for-Motion: Precise and Consistent Video Editing with 3D Proxy  
[[PDF](https://arxiv.org/pdf/2506.22432),[Page](https://shapeformotion.github.io/)] ![Code](https://img.shields.io/github/stars/yuhaoliu7456/Shape-for-Motion?style=social&label=Star)\n\n[arxiv 2025.08] DreamSwapV: Mask-guided Subject Swapping for Any Customized Video Editing  [[PDF](https://arxiv.org/pdf/2508.14465)]\n\n[arxiv 2025.08]  Lumen: Consistent Video Relighting and Harmonious Background Replacement with Video Generative Models [[PDF](https://arxiv.org/abs/2508.12945),[Page](https://lumen-relight.github.io/)] ![Code](https://img.shields.io/github/stars/Kunbyte-AI/Lumen?style=social&label=Star)\n\n[arxiv 2025.09] ANYPORTAL: Zero-Shot Consistent Video Background Replacement  [[PDF](https://arxiv.org/abs/2509.07472),[Page](https://gaowenshuo.github.io/AnyPortal/)] ![Code](https://img.shields.io/github/stars/gaowenshuo/AnyPortalCode?style=social&label=Star)\n\n[arxiv 2025.09] EditVerse: Unifying Image and Video Editing and Generation with In-Context Learning  [[PDF](https://arxiv.org/abs/2509.20360),[Page](http://editverse.s3-website-us-east-1.amazonaws.com/)] \n\n[arxiv 2025.09] Taming Flow-based I2V Models for Creative Video Editing  [[PDF](https://arxiv.org/abs/2509.21917)]\n\n[arxiv 2025.10] Streaming Drag-Oriented Interactive Video Manipulation: Drag Anything, Anytime!  [[PDF](https://arxiv.org/pdf/2510.03550)]\n\n[arxiv 2025.10] InstructX: Towards Unified Visual Editing with MLLM Guidance  [[PDF](https://arxiv.org/abs/2510.08485),[Page](https://mc-e.github.io/project/InstructX/)] ![Code](https://img.shields.io/github/stars/MC-E/InstructX?style=social&label=Star)\n\n[arxiv 2025.10] VALA: Learning Latent Anchors for Training-Free and Temporally Consistent  [[PDF](https://arxiv.org/abs/2510.22970)]\n\n[arxiv 2025.11] MotionV2V: Editing Motion in a Video  [[PDF](https://arxiv.org/abs/2511.20640),[Page](https://ryanndagreat.github.io/MotionV2V/)] ![Code](https://img.shields.io/github/stars/RyannDaGreat/MotionV2V?style=social&label=Star)\n\n[arxiv 2025.12] IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning  [[PDF](https://arxiv.org/abs/2512.15635),[Page](https://cuc-mipg.github.io/IC-Effect/)] ![Code](https://img.shields.io/github/stars/CUC-MIPG/IC-Effect?style=social&label=Star)\n\n[arxiv 2025.12] EasyV2V: A High-quality Instruction-based Video Editing Framework  [[PDF](https://arxiv.org/abs/2512.16920),[Page](https://snap-research.github.io/easyv2v/)] \n\n[arxiv 2025.12]  InsertAnywhere: Bridging 4D Scene Geometry and Diffusion Models for Realistic Video Object Insertion [[PDF](https://arxiv.org/abs/2512.17504),[Page](https://myyzzzoooo.github.io/InsertAnywhere/)] ![Code](https://img.shields.io/github/stars/myyzzzoooo/InsertAnywhere?style=social&label=Star)\n\n[arxiv 2026.01]  Tuning-free Visual Effect Transfer across Videos [[PDF](https://arxiv.org/abs/2601.07833),[Page](https://tuningfreevisualeffects-maker.github.io/Tuning-free-Visual-Effect-Transfer-across-Videos-Project-Page/)] \n\n[arxiv 2026.01] EditYourself: Audio-Driven Generation and Manipulation of Talking Head Videos with Diffusion Transformers  [[PDF](https://arxiv.org/abs/2601.22127),[Page](https://edit-yourself.github.io/)] \n\n[arxiv 2026.01] CineScene: Implicit 3D as Effective Scene Representation for Cinematic Video Generation  [[PDF](https://arxiv.org/pdf/2602.06959),[Page](https://karine-huang.github.io/CineScene/)] \n\n[arxiv 2026.02] EditCtrl: Disentangled Local and Global Control for Real-Time Generative Video Editing  
[[PDF](http://arxiv.org/abs/2602.15031),[Page](https://yehonathanlitman.github.io/edit_ctrl/)] ![Code](https://img.shields.io/github/stars/yehonathanlitman/EditCtrl?style=social&label=Star)\n\n[arxiv 2026.02] ChordEdit: One-Step Low-Energy Transport for Image Editing  [[PDF](https://arxiv.org/abs/2602.19083),[Page](https://chordedit.github.io/)] ![Code](https://img.shields.io/github/stars/ChordEdit/ChordEdit?style=social&label=Star)\n\n[arxiv 2026.03] When to Lock Attention: Training-Free KV Control in Video Diffusion  [[PDF](https://arxiv.org/pdf/2603.09657)]\n\n[arxiv 2026.03] ViFeEdit: A Video-Free Tuner of Your Video Diffusion Transformer  [[PDF](https://arxiv.org/abs/2603.15478)] ![Code](https://img.shields.io/github/stars/Lexie-YU/ViFeEdit?style=social&label=Star)\n\n[arxiv 2026.03] SAMA: Factorized Semantic Anchoring and Motion Alignment for Instruction-Guided Video Editing  [[PDF](https://arxiv.org/abs/2603.19228),[Page](https://cynthiazxy123.github.io/SAMA/)] ![Code](https://img.shields.io/github/stars/Cynthiazxy123/SAMA?style=social&label=Star)\n\n[arxiv 2026.03] TRACE: Object Motion Editing in Videos with First-Frame Trajectory Guidance  [[PDF](https://arxiv.org/abs/2603.25707),[Page](https://trace-motion.github.io/)]\n\n[arxiv 2026.03] AVControl: Efficient Framework for Training Audio-Visual Controls  [[PDF](https://arxiv.org/abs/2603.24793),[Page](https://matanby.github.io/AVControl/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Editing with image model \n*[arxiv 2022.12]Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation [[PDF](https://arxiv.org/abs/2212.11565), [Page](https://tuneavideo.github.io/)]\n\n[arxiv 2023.03]Video-P2P: Video Editing with Cross-attention Control [[PDF](https://arxiv.org/abs/2303.04761), [Page](https://video-p2p.github.io/)]\n\n[arxiv 2023.03]Edit-A-Video: Single Video Editing with Object-Aware Consistency [[PDF](https://arxiv.org/abs/2303.07945), [Page](https://edit-a-video.github.io/)]\n\n[arxiv 2023.03]FateZero: Fusing Attentions for Zero-shot Text-based Video Editing [[PDF](https://arxiv.org/abs/2303.09535), [Page](https://github.com/ChenyangQiQi/FateZero)]\n\n[arxiv 2023.03]Pix2Video: Video Editing using Image Diffusion [[PDF](https://arxiv.org/abs/2303.12688)]\n\n->[arxiv 2023.03]Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators [[PDF](https://arxiv.org/abs/2303.13439), [code](https://github.com/Picsart-AI-Research/Text2Video-Zero)]\n\n[arxiv 2023.03]Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models[[PDF](https://arxiv.org/abs/2303.17599),[code](https://github.com/baaivision/vid2vid-zero)]\n\n[arxiv 2023.04]Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos[[PDF](https://arxiv.org/abs/2304.01186)]\n\n[arxiv 2023.05]ControlVideo: Training-free Controllable Text-to-Video Generation [[PDF](https://arxiv.org/abs/2305.13077), [Page](https://github.com/YBYBZhang/ControlVideo)]\n\n[arxiv 2023.05]Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models[[PDF](https://arxiv.org/abs/2305.13840), [Page](https://controlavideo.github.io/)]\n\n[arxiv-2023.05]Large Language Models are Frame-level Directors for Zero-shot Text-to-Video Generation [[PDF](https://arxiv.org/abs/2305.14330), [Page](https://github.com/KU-CVLAB/DirecT2V)]\n\n[arxiv 2023.05]Video ControlNet: Towards Temporally Consistent Synthetic-to-Real Video Translation Using Conditional Image 
Diffusion Models [[PDF](https://arxiv.org/abs/2305.19193)]\n\n[arxiv 2023.05]SAVE: Spectral-Shift-Aware Adaptation of Image Diffusion Models for Text-guided Video Editing [[PDF](https://arxiv.org/abs/2305.18670)]\n\n[arxiv 2023.05]InstructVid2Vid: Controllable Video Editing with Natural Language Instructions [[PDF](https://arxiv.org/abs/2305.12328)]\n\n[arxiv 2023.05] ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing [[PDF](https://arxiv.org/pdf/2305.17098.pdf), [Page](https://ml.cs.tsinghua.edu.cn/controlvideo/)]\n\n[arxiv 2023.05]Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising [[PDF](https://arxiv.org/abs/2305.18264),[Page](https://g-u-n.github.io/projects/gen-long-video/index.html)]\n\n[arxiv 2023.06]Make-Your-Video: Customized Video Generation Using Textual and Structural Guidance [[PDF](https://arxiv.org/abs/2306.00943), [Page](https://doubiiu.github.io/projects/Make-Your-Video/)]\n\n[arxiv 2023.06]VidEdit: Zero-Shot and Spatially Aware Text-Driven Video Editing [[PDF](https://arxiv.org/abs/2306.08707),[Page](https://videdit.github.io/)]\n\n*[arxiv 2023.06]Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [[PDF](https://arxiv.org/abs/2306.07954), [Page](https://anonymous-31415926.github.io/)]\n\n*[arxiv 2023.07]AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning [[PDF](https://arxiv.org/abs/2307.04725),  [Page](https://animatediff.github.io/)]\n\n*[arxiv 2023.07]TokenFlow: Consistent Diffusion Features for Consistent Video Editing [[PDF](https://arxiv.org/pdf/2307.10373.pdf),[Page](https://diffusion-tokenflow.github.io/)]\n\n[arxiv 2023.07]VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet [[PDF](https://arxiv.org/pdf/2307.14073.pdf), [Page](https://vcg-aigc.github.io/)]\n\n[arxiv 2023.08]CoDeF: Content Deformation Fields for Temporally Consistent Video Processing [[PDF](https://arxiv.org/pdf/2308.07926.pdf), [Page](https://qiuyu96.github.io/CoDeF/)]\n\n[arxiv 2023.08]DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory [[PDF](https://arxiv.org/abs/2308.08089), [Page](https://www.microsoft.com/en-us/research/project/dragnuwa/)]\n\n[arxiv 2023.08]StableVideo: Text-driven Consistency-aware Diffusion Video Editing [[PDF](https://arxiv.org/abs/2308.09592), [Page](https://github.com/rese1f/StableVideo)]\n\n[arxiv 2023.08]Edit Temporal-Consistent Videos with Image Diffusion Model [[PDF](https://arxiv.org/abs/2308.09091)]\n\n[arxiv 2023.08]EVE: Efficient zero-shot text-based Video Editing with Depth Map Guidance and Temporal Consistency Constraints [[PDF](https://arxiv.org/pdf/2308.10648.pdf)]\n\n[arxiv 2023.08]MagicEdit: High-Fidelity and Temporally Coherent Video Editing [[PDF](https://arxiv.org/pdf/2308.14749), [Page](https://magic-edit.github.io/)]\n\n[arxiv 2023.09]MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation[[PDF](https://arxiv.org/pdf/2309.00908.pdf)]\n\n[arxiv 2023.09]Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator[[PDF](https://arxiv.org/abs/2309.14494), [Page](https://github.com/SooLab/Free-Bloom)]\n\n[arxiv 2023.09]CCEdit: Creative and Controllable Video Editing via Diffusion Models [[PDF](https://arxiv.org/abs/2309.16496)]\n\n[arxiv 2023.10]Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models [[PDF](https://arxiv.org/abs/2310.01107),[Page](https://ground-a-video.github.io/)]\n\n[arxiv 2023.10]FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing [[PDF](https://arxiv.org/abs/2310.05922),[Page](https://flatten-video-editing.github.io/)]\n\n[arxiv 2023.10]ConditionVideo: Training-Free Condition-Guided Text-to-Video Generation [[PDF](https://arxiv.org/abs/2310.07697),[Page](https://pengbo807.github.io/conditionvideo-website/)]\n\n[arxiv 2023.10, nerf] DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing [[PDF](https://arxiv.org/abs/2310.10624), [Page](https://showlab.github.io/DynVideo-E/)]\n\n[arxiv 2023.10]LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation [[PDF](https://arxiv.org/abs/2310.10769),[Page](https://rq-wu.github.io/projects/LAMP/index.html)]\n\n[arxiv 2023.11]LatentWarp: Consistent Diffusion Latents for Zero-Shot Video-to-Video Translation [[PDF](https://arxiv.org/pdf/2311.00353.pdf)]\n\n[arxiv 2023.11]Cut-and-Paste: Subject-Driven Video Editing with Attention Control[[PDF](https://arxiv.org/abs/2311.11697)]\n\n[arxiv 2023.11]MotionZero: Exploiting Motion Priors for Zero-shot Text-to-Video Generation [[PDF](https://arxiv.org/abs/2311.16635)]\n\n[arxiv 2023.12]Motion-Conditioned Image Animation for Video Editing [[PDF](https://arxiv.org/pdf/2311.18827.pdf), [Page](https://facebookresearch.github.io/MoCA/)]\n\n[arxiv 2023.12]RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models [[PDF](https://arxiv.org/abs/2312.04524),[Page](https://rave-video.github.io/)]\n\n[arxiv 2023.12]DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing [[PDF](https://arxiv.org/abs/2312.03772)]\n\n[arxiv 2023.12]MagicStick: Controllable Video Editing via Control Handle Transformations [[PDF](https://arxiv.org/abs/2312.03047),[Page](https://github.com/mayuelala/MagicStick)]\n\n[arxiv 2023.12]SAVE: Protagonist Diversification with Structure Agnostic Video Editing [[PDF](https://arxiv.org/abs/2312.02503),[Page](https://ldynx.github.io/SAVE/)]\n\n[arxiv 2023.12]VidToMe: Video Token Merging for Zero-Shot Video Editing [[PDF](https://arxiv.org/abs/2312.10656),[Page](https://vidtome-diffusion.github.io/)]\n\n[arxiv 2023.12]Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis [[PDF](https://arxiv.org/abs/2312.13834),[Page](https://fairy-video2video.github.io/)]\n\n[arxiv 2024.01]Object-Centric Diffusion for Efficient Video Editing [[PDF](https://arxiv.org/abs/2401.05735)]\n\n[arxiv 2024.01]VASE: Object-Centric Shape and Appearance Manipulation of Real Videos [[PDF](https://arxiv.org/abs/2401.02473),[Page](https://helia95.github.io/vase-website/)]\n\n[arxiv 2024.03]FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation [[PDF](https://arxiv.org/abs/2403.12962),[Page](https://www.mmlab-ntu.com/project/fresco/)]\n\n[arxiv 2024.04]GenVideo: One-shot Target-image and Shape Aware Video Editing using T2I Diffusion Models [[PDF](https://arxiv.org/abs/2404.12541)]\n\n[arxiv 2024.05]Edit-Your-Motion: Space-Time Diffusion Decoupling Learning for Video Motion Editing [[PDF](https://arxiv.org/abs/2405.04496),[Page](https://github.com/yiiizuo/Edit-Your-Motion)]\n\n[arxiv 2024.05] Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices  [[PDF](https://arxiv.org/abs/2405.12211),[Page](https://matankleiner.github.io/slicedit/)]\n\n[arxiv 2024.05] Looking Backward: Streaming Video-to-Video Translation with Feature Banks [[PDF](https://arxiv.org/abs/2405.15757),[Page](https://jeff-liangf.github.io/projects/streamv2v)]\n\n[arxiv 2024.06]Follow-Your-Pose v2: Multiple-Condition Guided Character Image Animation for Stable Pose Control [[PDF](https://arxiv.org/abs/2406.03035)]\n\n[arxiv 2024.06]NaRCan: Natural Refined Canonical Image with Integration of Diffusion Prior for Video Editing[[PDF](https://arxiv.org/abs/2406.06523),[Page](https://koi953215.github.io/NaRCan_page/)]\n\n[arxiv 2024.06]VIA: A Spatiotemporal Video Adaptation Framework for Global and Local Video Editing [[PDF](https://arxiv.org/abs/2406.12831),[Page](https://via-video.github.io/)]\n\n[arxiv 2024.10] L-C4: Language-Based Video Colorization for Creative and Consistent Color [[PDF](https://arxiv.org/abs/2410.04972)] \n\n[arxiv 2024.10] HARIVO: Harnessing Text-to-Image Models for Video Generation [[PDF](https://kwonminki.github.io/HARIVO/),[Page](https://kwonminki.github.io/HARIVO/)] \n\n[arxiv 2024.12] DIVE: Taming DINO for Subject-Driven Video Editing  [[PDF](https://arxiv.org/abs/2412.03347),[Page](https://dino-video-editing.github.io/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Completion (animation, interpolation, prediction)\n[arxiv 2022; Meta] Tell Me What Happened: Unifying Text-guided Video Completion via Multimodal Masked Video Generation \\[[PDF](https://arxiv.org/pdf/2211.12824.pdf), code]\n\n[arxiv 2023.03]LDMVFI: Video Frame Interpolation with Latent Diffusion Models[[PDF](https://arxiv.org/abs/2303.09508)]\n\n*[arxiv 2023.03]Seer: Language Instructed Video Prediction with Latent Diffusion Models [[PDF](https://arxiv.org/abs/2303.14897)]\n\n\n[arxiv 2024.12]  Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation [[PDF](https://arxiv.org/abs/2303.00440),[Page](https://github.com/MCG-NJU/EMA-VFI?tab=readme-ov-file)] ![Code](https://img.shields.io/github/stars/MCG-NJU/EMA-VFI?style=social&label=Star)\n\n[arxiv 2023.10]DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors [[PDF](https://arxiv.org/abs/2310.12190), [Page](https://github.com/AILab-CVC/VideoCrafter)]\n\n[arxiv 2024.03]Explorative Inbetweening of Time and Space [[PDF](https://time-reversal.github.io/),[Page](https://time-reversal.github.io/)]\n\n[arxiv 2024.04]Video Interpolation With Diffusion Models [[PDF](https://arxiv.org/abs/2404.01203),[Page](https://vidim-interpolation.github.io/)]\n\n[arxiv 2024.04]Sparse Global Matching for Video Frame Interpolation with Large Motion [[PDF](https://arxiv.org/abs/2404.06913),[Page](https://sgm-vfi.github.io/)]\n\n[arxiv 2024.04]LADDER: An Efficient Framework for Video Frame Interpolation [[PDF](https://arxiv.org/abs/2404.11108)]\n\n[arxiv 2024.04]Motion-aware Latent Diffusion Models for Video Frame Interpolation [[PDF](https://arxiv.org/abs/2404.13534)]\n\n[arxiv 2024.04]Event-based Video Frame Interpolation with Edge Guided Motion Refinement [[PDF](https://arxiv.org/abs/2404.18156)]\n\n[arxiv 2024.04]StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation [[PDF](https://arxiv.org/abs/2405.01434),[Page](https://github.com/HVision-NKU/StoryDiffusion)]\n\n[arxiv 2024.04]Frame Interpolation with Consecutive Brownian Bridge Diffusion[[PDF](https://arxiv.org/abs/2405.05953),[Page](https://zonglinl.github.io/videointerp/)]\n\n[arxiv 
2024.05]ToonCrafter: Generative Cartoon Interpolation [[PDF](https://arxiv.org/abs/2405.17933),[Page](https://doubiiu.github.io/projects/ToonCrafter/)]\n\n[arxiv 2024.06]Disentangled Motion Modeling for Video Frame Interpolation [[PDF](https://arxiv.org/abs/2406.17256),[Page](https://github.com/JHLew/MoMo)]\n\n[arxiv 2024.07] VFIMamba: Video Frame Interpolation with State Space Models [[PDF](https://arxiv.org/abs/2407.02315),[Page](https://github.com/MCG-NJU/VFIMamba)]\n\n[arxiv 2024.08] Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation[[PDF](https://arxiv.org/abs/2408.15239),[Page](https://svd-keyframe-interpolation.github.io/)]\n\n[arxiv 2024.10] High-Resolution Frame Interpolation with Patch-based Cascaded Diffusion [[PDF](https://arxiv.org/abs/2410.11838),[Page](https://hifi-diffusion.github.io/)]\n\n[arxiv 2024.10] Framer: Interactive Frame Interpolation [[PDF](https://arxiv.org/abs/2410.18978),[Page](https://aim-uofa.github.io/Framer/)]\n\n[arxiv 2024.12] Advanced Video Inpainting Using Optical Flow-Guided Efficient Diffusion  [[PDF](https://arxiv.org/pdf/2412.00857)]\n\n[arxiv 2024.12] Elevating Flow-Guided Video Inpainting with Reference Generation  [[PDF](https://arxiv.org/abs/2412.08975),[Page](https://github.com/suhwan-cho/RGVI)] ![Code](https://img.shields.io/github/stars/suhwan-cho/RGVI?style=social&label=Star)\n\n[arxiv 2024.12] Generative Inbetweening through Frame-wise Conditions-Driven Video Generation  [[PDF](https://fcvg-inbetween.github.io/),[Page](https://fcvg-inbetween.github.io/)] ![Code](https://img.shields.io/github/stars/Tian-one/FCVG?style=social&label=Star)\n\n[arxiv 2025.01]  MoG: Motion-Aware Generative Frame Interpolation [[PDF](https://arxiv.org/abs/2501.03699),[Page](https://mcg-nju.github.io/MoG_Web/)] \n\n[arxiv 2025.02]  Seeing World Dynamics in a Nutshell [[PDF](https://arxiv.org/pdf/2502.03465),[Page](https://github.com/Nut-World/NutWorld)] ![Code](https://img.shields.io/github/stars/Nut-World/NutWorld?style=social&label=Star)\n\n\n[arxiv 2025.02] Event-based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields  [[PDF](https://openaccess.thecvf.com/content/CVPR2023/papers/Kim_Event-Based_Video_Frame_Interpolation_With_Cross-Modal_Asymmetric_Bidirectional_Motion_Fields_CVPR_2023_paper.pdf),[Page](https://github.com/intelpro/CBMNet)] ![Code](https://img.shields.io/github/stars/intelpro/CBMNet?style=social&label=Star)\n\n[arxiv 2025.03] VideoPainter: Any-length Video Inpainting and Editing with Plug-and-Play Context Control  [[PDF](https://arxiv.org/abs/2503.05639),[Page](https://yxbian23.github.io/project/video-painter/)] ![Code](https://img.shields.io/github/stars/TencentARC/VideoPainter?style=social&label=Star)\n\n[arxiv 2025.03] MTV-Inpaint: Multi-Task Long Video Inpainting  [[PDF](https://arxiv.org/pdf/2503.11412),[Page](https://mtv-inpaint.github.io/)] ![Code](https://img.shields.io/github/stars/ysy31415/MTV-Inpaint?style=social&label=Star)\n\n[arxiv 2025.03] EDEN: Enhanced Diffusion for High-quality Large-motion Video Frame Interpolation  [[PDF](https://arxiv.org/abs/2503.15831),[Page](https://github.com/bbldCVer/EDEN)] ![Code](https://img.shields.io/github/stars/bbldCVer/EDEN?style=social&label=Star)\n\n[arxiv 2025.03]  EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation [[PDF](https://arxiv.org/abs/2503.20268),[Page](https://github.com/OpenImagingLab/EGVD)] 
![Code](https://img.shields.io/github/stars/OpenImagingLab/EGVD?style=social&label=Star)\n\n[arxiv 2025.04] Hierarchical Flow Diffusion for Efficient Frame Interpolation  [[PDF](https://arxiv.org/abs/2504.00380),[Page](https://hfd-interpolation.github.io/)] \n\n\n[arxiv 2025.04]  Time-adaptive Video Frame Interpolation based on Residual Diffusion [[PDF](https://arxiv.org/abs/2504.05402)]\n\n[arxiv 2025.05] TimeTracker: Event-based Continuous Point Tracking for Video Frame Interpolation with Non-linear Motion  [[PDF](https://arxiv.org/abs/2505.03116)]\n\n[arxiv 2025.06]  Controllable Human-centric Keyframe Interpolation with Generative Prior [[PDF](https://arxiv.org/abs/2506.03119),[Page](https://gseancdat.github.io/projects/PoseFuse3D_KI)] ![Code](https://img.shields.io/github/stars/GSeanCDAT/PoseFuse3D-KI?style=social&label=Star)\n\n[arxiv 2025.07] Semantic Frame Interpolation  [[PDF](https://arxiv.org/pdf/2507.05173),[Page](https://hyj542682306.github.io/sfi/)] ![Code](https://img.shields.io/github/stars/hyj542682306/Semantic-Frame-Interpolation?style=social&label=Star)\n\n[arxiv 2025.07]TLB-VFI: Temporal-Aware Latent Brownian Bridge Diffusion for Video Frame Interpolation   [[PDF](https://arxiv.org/abs/2507.04984)]\n\n[arxiv 2025.10]  Arbitrary Generative Video Interpolation [[PDF](https://arxiv.org/abs/2510.00578),[Page](https://mcg-nju.github.io/ArbInterp-Web/)] \n\n[arxiv 2025.10]  MultiCOIN: Multi-Modal COntrollable Video INbetweening [[PDF](https://arxiv.org/abs/2510.08561),[Page](https://multicoinx.github.io/multicoin/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n\n## style transfer \n[arxiv 2023.06]Probabilistic Adaptation of Text-to-Video Models [[PDF](https://arxiv.org/abs/2306.01872)]\n\n[arxiv 2023.11]Highly Detailed and Temporal Consistent Video Stylization via Synchronized Multi-Frame Diffusion[[PDF](https://arxiv.org/abs/2311.14343)]\n\n[arxiv 2023.12]StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter[[PDF](https://arxiv.org/abs/2312.00330),[Page](https://gongyeliu.github.io/StyleCrafter.github.io/)]\n\n[arxiv 2023.12]DragVideo: Interactive Drag-style Video Editing [[PDF](https://arxiv.org/abs/2312.02216)]\n\n[arxiv 2024.03]FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation [[PDF](https://arxiv.org/abs/2403.12962),[Page](https://github.com/williamyang1991/fresco)]\n\n[arxiv 2024.10] UniVST: A Unified Framework for Training-free Localized Video Style Transfer [[PDF](https://arxiv.org/abs/2410.20084)]\n\n[arxiv 2024.12]  StyleMaster: Stylize Your Video with Artistic Generation and Translation [[PDF](https://arxiv.org/abs/2412.07744),[Page](https://zixuan-ye.github.io/stylemaster)] ![Code](https://img.shields.io/github/stars/KwaiVGI/StyleMaster?style=social&label=Star)\n\n[arxiv 2025.03]  SOYO: A Tuning-Free Approach for Video Style Morphing via Style-Adaptive Interpolation in Diffusion Models [[PDF](https://arxiv.org/pdf/2503.06998)]\n\n[arxiv 2025.06]  Dreamland: Controllable World Creation with Simulator and Generative Models [[PDF](https://www.arxiv.org/abs/2506.08006),[Page](https://metadriverse.github.io/dreamland/)] \n\n[arxiv 2025.10] FreeViS: Training-free Video Stylization with Inconsistent References  [[PDF](https://arxiv.org/abs/2510.01686),[Page](https://xujiacong.github.io/FreeViS/)] ![Code](https://img.shields.io/github/stars/XuJiacong/FreeViS/?style=social&label=Star)\n\n[arxiv 2025.10] PickStyle: Video-to-Video Style Transfer with 
Context-Style Adapters  [[PDF](https://arxiv.org/abs/2510.07546),[Page](https://pickstyle.pickford.ai/)] ![Code](https://img.shields.io/github/stars/PickfordAI/pickstyle?style=social&label=Star)\n\n[arxiv 2026.01] DreamStyle: A Unified Framework for Video Stylization  [[PDF](https://arxiv.org/abs/2601.02785),[Page](https://lemonsky1995.github.io/dreamstyle/)] ![Code](https://img.shields.io/github/stars/LemonSky1995/DreamStyle?style=social&label=Star)\n\n[arxiv 2026.01] QwenStyle: Content-Preserving Style Transfer with Qwen-Image-Edit  [[PDF](https://arxiv.org/abs/2601.06202),[Page](https://github.com/witcherofresearch/Qwen-Image-Style-Transfer)] ![Code](https://img.shields.io/github/stars/witcherofresearch/Qwen-Image-Style-Transfer?style=social&label=Star)\n\n[arxiv 2026.01]  TeleStyle: Content-Preserving Style Transfer in Images and Videos [[PDF](https://arxiv.org/abs/2601.20175),[Page](https://tele-ai.github.io/TeleStyle/)] ![Code](https://img.shields.io/github/stars/Tele-AI/TeleStyle?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## architecture/distribution\n[arxiv 2024.12] Efficient Continuous Video Flow Model for Video Prediction  [[PDF](https://arxiv.org/abs/2412.05633)]\n\n[arxiv 2025.02] Efficient-vDiT: Efficient Video Diffusion Transformers With Attention Tile  [[PDF](https://arxiv.org/abs/2502.06155)]\n\n[arxiv 2025.02] Next Block Prediction: Video Generation via Semi-Autoregressive Modeling  [[PDF](https://arxiv.org/abs/2502.07737),[Page](https://renshuhuai-andy.github.io/NBP-project/)] ![Code](https://img.shields.io/github/stars/RenShuhuai-Andy/NBP?style=social&label=Star)\n\n[arxiv 2025.10]  Uniform Discrete Diffusion with Metric Path for Video Generation [[PDF](https://arxiv.org/abs/2510.24717),[Page](https://bitterdhg.github.io/URSA_page/)] ![Code](https://img.shields.io/github/stars/baaivision/URSA?style=social&label=Star)\n\n[arxiv 2025.11] Fractional Diffusion Bridge Models  [[PDF](https://arxiv.org/pdf/2511.01795)]\n\n[arxiv 2025.12]  Inferix: A Block-Diffusion based Next-Generation Inference Engine for World Simulation [[PDF](https://arxiv.org/abs/2511.20714),[Page](https://github.com/alibaba-damo-academy/Inferix)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/Inferix?style=social&label=Star)\n\n[arxiv 2026.01]  VideoAR: Autoregressive Video Generation via Next-Frame & Scale Prediction [[PDF](https://arxiv.org/abs/2601.05966),[Page]()] ![Code](https://img.shields.io/github/stars/DAGroup-PKU/ReVidgen/?style=social&label=Star)\n\n[arxiv 2026.01] Stable Velocity: A Variance Perspective on Flow Matching  [[PDF](https://arxiv.org/pdf/2602.05435),[Page](https://github.com/linYDTHU/StableVelocity)] ![Code](https://img.shields.io/github/stars/linYDTHU/StableVelocity?style=social&label=Star)\n\n[arxiv 2026.03]  Scale Space Diffusion [[PDF](https://arxiv.org/abs/2603.08709),[Page](https://prateksha.github.io/projects/scale-space-diffusion/)] ![Code](https://img.shields.io/github/stars/prateksha/ScaleSpaceDiffusion?style=social&label=Star)\n\n[arxiv 2026.03]  Reviving ConvNeXt for Efficient Convolutional Diffusion Models [[PDF](https://arxiv.org/abs/2603.09408),[Page](https://github.com/star-kwon/FCDM)] ![Code](https://img.shields.io/github/stars/star-kwon/FCDM?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## embodied AI\n[arxiv 2026.01] Rethinking Video Generation 
Model for the Embodied World  [[PDF](https://arxiv.org/abs/2601.15282),[Page](https://dagroup-pku.github.io/ReVidgen.github.io/)] ![Code](https://img.shields.io/github/stars/DAGroup-PKU/ReVidgen/?style=social&label=Star)\n\n[arxiv 2026.03] Egocentric World Model for Photorealistic Hand-Object Interaction Synthesis  [[PDF](https://arxiv.org/abs/2603.13615)]\n\n[arxiv 2026.03] Persistent Robot World Models: Stabilizing Multi-Step Rollouts via Reinforcement Learning  [[PDF](https://arxiv.org/abs/2603.25685),[Page](https://jaibardhan.com/persistworld/)] ![Code](https://img.shields.io/github/stars/Jai2500/PersistWorld?style=social&label=Star)\n\n[arxiv 2026.03] ABot-PhysWorld: Interactive World Foundation Model for Robotic Manipulation with Physics Alignment  [[PDF](https://arxiv.org/abs/2603.23376)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Evaluation \n[arxiv 2023.10]EvalCrafter: Benchmarking and Evaluating Large Video Generation Models [[PDF](https://arxiv.org/abs/2310.11440),[Page](https://evalcrafter.github.io/)]\n\n[arxiv 2023.11]FETV: A Benchmark for Fine-Grained Evaluation of Open-Domain Text-to-Video Generation [[PDF](https://arxiv.org/abs/2311.01813)]\n\n[arxiv 2023.11]Online Video Quality Enhancement with Spatial-Temporal Look-up Tables [[PDF](https://arxiv.org/abs/2311.13616)]\n\n\n[ICCV 2023]Exploring Video Quality Assessment on User Generated Contents from Aesthetic and Technical Perspectives [[PDF](https://arxiv.org/abs/2211.04894),[Page](https://github.com/VQAssessment/DOVER)]\n\n[arxiv 2023.11]HIDRO-VQA: High Dynamic Range Oracle for Video Quality Assessment [[PDF](https://arxiv.org/abs/2311.11059)]\n\n[arxiv 2023.12]VBench: Comprehensive Benchmark Suite for Video Generative Models [[PDF](https://arxiv.org/abs/2311.17982), [Page](https://vchitect.github.io/VBench-project/)]\n\n[arxiv 2024.02]Perceptual Video Quality Assessment: A Survey [[PDF](https://arxiv.org/abs/2402.03413)]\n\n[arxiv 2024.02]KVQ: Kaleidoscope Video Quality Assessment for Short-form Videos [[PDF](https://arxiv.org/abs/2402.07220)]\n\n[arxiv 2024.03]STREAM: Spatio-TempoRal Evaluation and Analysis Metric for Video Generative Models [[PDF](https://arxiv.org/abs/2403.09669)]\n\n[arxiv 2024.03]Modular Blind Video Quality Assessment [[PDF](https://arxiv.org/abs/2402.19276)]\n\n[arxiv 2024.03]Subjective-Aligned Dateset and Metric for Text-to-Video Quality Assessment [[PDF](https://arxiv.org/abs/2403.11956)]\n\n[arxiv 2024.06] GenAI Arena: An Open Evaluation Platform for Generative Models[[PDF](https://arxiv.org/abs/2406.04485),[Page](https://huggingface.co/spaces/TIGER-Lab/GenAI-Arena)]\n\n[arxiv 2024.06]VideoPhy: Evaluating Physical Commonsense for Video Generation [[PDF](http://arxiv.org/abs/2406.03520),[Page](https://videophy.github.io/)]\n\n[arxiv 2024.07]T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation [[PDF](https://arxiv.org/abs/2407.14505),[Page](https://t2v-compbench.github.io/)]\n\n[arxiv 2024.07]Fréchet Video Motion Distance: A Metric for Evaluating Motion Consistency in Videos [[PDF](https://arxiv.org/abs/2407.16124)]\n\n[arxiv 2024.10] The Dawn of Video Generation: Preliminary Explorations with SORA-like Models [[PDF](https://arxiv.org/abs/2410.05227),[Page](https://ailab-cvc.github.io/VideoGen-Eval/)]\n\n[arxiv 
2024.10] Beyond FVD: Enhanced Evaluation Metrics for Video Generation Quality[[PDF](https://arxiv.org/abs/2410.05203),[Page](https://oooolga.github.io/JEDi.github.io/)]\n\n[arxiv 2024.11] ViBe: A Text-to-Video Benchmark for Evaluating Hallucination in Large Multimodal Models [[PDF](https://arxiv.org/abs/2411.10867),[Page](https://vibe-t2v-bench.github.io/)]\n\n[arxiv 2024.11] VBench++: Comprehensive and Versatile Benchmark Suite for Video Generative Models [[PDF](https://arxiv.org/abs/2411.13503),[Page](https://github.com/Vchitect/VBench)]\n\n[arxiv 2024.12]  Is Your World Simulator a Good Story Presenter? A Consecutive Events-Based Benchmark for Future Long Video Generation [[PDF](https://arxiv.org/abs/),[Page](https://ypwang61.github.io/project/StoryEval/)] ![Code](https://img.shields.io/github/stars/ypwang61/StoryEval?style=social&label=Star)\n\n[arxiv 2025.01]  MEt3R: Measuring Multi-View Consistency in Generated Images [[PDF](https://arxiv.org/abs/2501.06336),[Page](https://geometric-rl.mpi-inf.mpg.de/met3r/)] ![Code](https://img.shields.io/github/stars/mohammadasim98/MEt3R?style=social&label=Star)\n\n[arxiv 2025.02]  MJ-VIDEO: Fine-Grained Benchmarking and Rewarding Video Preferences in Video Generation [[PDF](https://arxiv.org/pdf/2502.01719)]\n\n[arxiv 2025.03]  What Are You Doing? A Closer Look at Controllable Human Video Generation [[PDF](https://arxiv.org/pdf/2503.04666),[Page](https://github.com/google-deepmind/wyd-benchmark)] ![Code](https://img.shields.io/github/stars/google-deepmind/wyd-benchmark?style=social&label=Star)\n\n[arxiv 2025.03]  Exploring the Evolution of Physics Cognition in Video Generation: A Survey [[PDF](https://arxiv.org/abs/2503.21765),[Page](https://github.com/minnie-lin/Awesome-Physics-Cognition-based-Video-Generation)] ![Code](https://img.shields.io/github/stars/minnie-lin/Awesome-Physics-Cognition-based-Video-Generation?style=social&label=Star)\n\n[arxiv 2025.03] VBench-2.0: Advancing Video Generation Benchmark Suite for Intrinsic Faithfulness  [[PDF](https://arxiv.org/abs/2503.21755),[Page](https://vchitect.github.io/VBench-2.0-project/)] ![Code](https://img.shields.io/github/stars/Vchitect/VBench?style=social&label=Star)\n\n[arxiv 2025.04]  VideoGen-Eval: Agent-based System for Video Generation Evaluation [[PDF](https://arxiv.org/abs/2503.23452),[Page](https://github.com/AILab-CVC/VideoGen-Eval)] ![Code](https://img.shields.io/github/stars/AILab-CVC/VideoGen-Eval?style=social&label=Star)\n\n[arxiv 2025.04] Video-Bench: Human-Aligned Video Generation Benchmark  [[PDF](https://arxiv.org/abs/2504.04907),[Page](https://github.com/Video-Bench/Video-Bench)] ![Code](https://img.shields.io/github/stars/Video-Bench/Video-Bench?style=social&label=Star)\n\n[arxiv 2025.04] Can You Count to Nine? 
A Human Evaluation Benchmark for Counting Limits in Modern Text-to-Video Models  [[PDF](https://arxiv.org/abs/2504.04051)]\n\n[arxiv 2025.04] VEU-Bench: Towards Comprehensive Understanding of Video Editing [[PDF](https://arxiv.org/abs/your-paper-id),[Page](https://labazh.github.io/VEU-Bench.github.io/)] \n\n[arxiv 2025.05]  Direct Motion Models for Assessing Generated Videos [[PDF](https://arxiv.org/abs/2505.00209),[Page](https://trajan-paper.github.io/)] \n\n[arxiv 2025.06]  ShotBench: Expert-Level Cinematic Understanding in Vision-Language Models [[PDF](https://arxiv.org/abs/2506.21356),[Page](https://vchitect.github.io/ShotBench-project/)] ![Code](https://img.shields.io/github/stars/Vchitect/ShotBench?style=social&label=Star)\n\n[arxiv 2025.10]  Stable Cinemetrics: Structured Taxonomy and Evaluation for Professional Video Generation [[PDF](https://arxiv.org/abs/2509.26555),[Page](https://stable-cinemetrics.github.io/)] \n\n[arxiv 2025.10]  IVEBench: Modern Benchmark Suite for Instruction-Guided Video Editing Assessment [[PDF](https://arxiv.org/abs/2510.11647),[Page](https://ryanchenyn.github.io/projects/IVEBench/)] ![Code](https://img.shields.io/github/stars/RyanChenYN/IVEBench?style=social&label=Star)\n\n[arxiv 2025.10] Rethinking Visual Intelligence: Insights from Video Pretraining  [[PDF](https://arxiv.org/abs/2510.24448)]\n\n[arxiv 2025.10] Are Video Models Ready as Zero-Shot Reasoners? An Empirical Study with the MME-CoF Benchmark  [[PDF](https://arxiv.org/pdf/2510.26802),[Page](https://video-cof.github.io/)] ![Code](https://img.shields.io/github/stars/ZiyuGuo99/MME-CoF?style=social&label=Star)\n\n[arxiv 2025.10] LoCoT2V-Bench: A Benchmark for Long-Form and Complex Text-to-Video Generation  [[PDF](https://arxiv.org/pdf/2510.26412)]\n\n[arxiv 2025.11] Thinking with Video: Video Generation as a Promising Multimodal Reasoning Paradigm  [[PDF](https://arxiv.org/abs/2511.04570),[Page](https://thinking-with-video.github.io/)] ![Code](https://img.shields.io/github/stars/tongjingqi/Thinking-with-Video?style=social&label=Star)\n\n[arxiv 2025.11]  VR-Bench: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks [[PDF](https://arxiv.org/abs/2511.15065),[Page](https://imyangc7.github.io/VRBench_Web/)] ![Code](https://img.shields.io/github/stars/ImYangC7/VR-Bench?style=social&label=Star)\n\n[arxiv 2025.11]  V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models [[PDF](https://arxiv.org/abs/2511.16668),[Page](https://oahzxl.github.io/VReasonBench/)] ![Code](https://img.shields.io/github/stars/yangluo7/V-ReasonBench?style=social&label=Star)\n\n[arxiv 2025.12] SVBench: Evaluation of Video Generation Models on Social Reasoning [[PDF](https://arxiv.org/abs/2512.21507),[Page](https://github.com/Gloria2tt/SVBench-Evaluation)] ![Code](https://img.shields.io/github/stars/Gloria2tt/SVBench-Evaluation?style=social&label=Star)\n\n[arxiv 2025.12] VIPER: Process-aware Evaluation for Generative Video Reasoning  [[PDF](https://arxiv.org/pdf/2512.24952)]\n\n[arxiv 2026.01] Are Video Generation Models Geographically Fair? An Attraction-Centric Evaluation of Global Visual Knowledge  [[PDF](https://arxiv.org/pdf/2601.18698)]\n\n[arxiv 2026.01]  Omni-Judge: Can Omni-LLMs Serve as Human-Aligned Judges for Text-Conditioned Audio-Video Generation? [[PDF](https://arxiv.org/pdf/2602.01623),[Page](https://liangsusan-git.github.io/project/omni_judge/)] \n\n[arxiv 2026.03] MSVBench: Towards Human-Level Evaluation of Multi-Shot Video Generation  [[PDF](https://arxiv.org/pdf/2602.23969)]\n\n[arxiv 2026.03] Physion-Eval: Evaluating Physical Realism in Generated Video via Human Reasoning [[PDF](https://arxiv.org/abs/2603.19607)]\n\n[arxiv 2026.03] Omni-WorldBench: Towards a Comprehensive Interaction-Centric Evaluation for World Models  [[PDF](https://arxiv.org/abs/2603.22212)]\n\n[arxiv 2026.03] EC-Bench: Enumeration and Counting Benchmark for Ultra-Long Videos  [[PDF](https://arxiv.org/abs/2603.29943),[Page](https://github.com/matsuolab/EC-Bench)] ![Code](https://img.shields.io/github/stars/matsuolab/EC-Bench?style=social&label=Star)\n\n[arxiv 2026.03] SLVMEval: Synthetic Meta Evaluation Benchmark for Text-to-Long Video Generation  [[PDF](https://arxiv.org/abs/2603.29186)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Survey\n[arxiv 2023.10]A Survey on Video Diffusion Models [[PDF](https://arxiv.org/abs/2310.10647)]\n\n[arxiv 2024.05]Video Diffusion Models: A Survey [[PDF](https://arxiv.org/abs/2405.03150)]\n\n[arxiv 2024.07]Diffusion Model-Based Video Editing: A Survey [[PDF](https://arxiv.org/abs/2407.07111),[Page](https://github.com/wenhao728/awesome-diffusion-v2v)]\n\n[ResearchGate 2024.07]Conditional Video Generation Guided by Multimodal Inputs: A Comprehensive Survey [[PDF](https://www.researchgate.net/publication/382443305_Conditional_Video_Generation_Guided_by_Multimodal_Inputs_A_Comprehensive_Survey)]\n\n[arxiv 2025.04]  Survey of Video Diffusion Models: Foundations, Implementations, and Applications [[PDF](https://arxiv.org/abs/2504.16081)]\n\n## Edge device\n[arxiv 2026.01] SnapGen++: Unleashing Diffusion Transformers for Efficient High-Fidelity Image Generation on Edge Devices  [[PDF](https://arxiv.org/pdf/2601.08303)]\n\n[arxiv 2026.01]  S2DiT: Sandwich Diffusion Transformer for Mobile Streaming Video Generation [[PDF](https://arxiv.org/pdf/2601.12719)]\n\n[arxiv 2026.01] NanoFLUX: Distillation-Driven Compression of Large Text-to-Image Generation Models for Mobile Devices  [[PDF](https://arxiv.org/abs/2602.06879)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Speed \n[arxiv 2023.12]F3-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis [[PDF](https://arxiv.org/abs/2312.03459)]\n\n[arxiv 2023.12]VideoLCM: Video Latent Consistency Model [[PDF](https://arxiv.org/abs/2312.09109)]\n\n[arxiv 2024.01]FlashVideo: A Framework for Swift Inference in Text-to-Video Generation [[PDF](https://arxiv.org/abs/2401.00869)]\n\n[arxiv 2024.01]AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning [[PDF](https://arxiv.org/abs/2402.00769),[Page](https://animatelcm.github.io/)]\n\n[arxiv 2024.03]AnimateDiff-Lightning: Cross-Model Diffusion Distillation [[PDF](https://arxiv.org/abs/2403.12706)]\n\n[arxiv 2024.05] T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback[[PDF](https://arxiv.org/abs/2405.18750),[Page](https://t2v-turbo.github.io/)]\n\n[arxiv 2024.05] PCM: Phased Consistency Model[[PDF](https://arxiv.org/abs/2405.18407),[Page](https://g-u-n.github.io/projects/pcm/)]\n\n[arxiv 2024.06]SF-V: Single Forward Video Generation Model [[PDF](https://arxiv.org/abs/2406.04324),[Page](https://snap-research.github.io/SF-V/)]\n\n[arxiv 2024.06] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation [[PDF](https://arxiv.org/abs/2406.06890), [Page](https://yhzhai.github.io/mcm/)]\n\n[arxiv 2024.07]QVD: Post-training Quantization for Video Diffusion Models [[PDF](https://arxiv.org/abs/2407.11585),[Page]()]\n\n[arxiv 2024.08]Real-Time Video Generation with Pyramid Attention Broadcast [[PDF](https://arxiv.org/abs/2408.12588),[Page](https://github.com/NUS-HPC-AI-Lab/VideoSys)]\n\n[arxiv 2024.11] Adaptive Caching for Faster Video Generation with Diffusion Transformers [[PDF](https://arxiv.org/abs/2411.02397),[Page](https://adacache-dit.github.io/)]\n\n[arxiv 2024.11] Fast and Memory-Efficient Video Diffusion Using Streamlined Inference [[PDF](https://arxiv.org/abs/2411.01171)]\n\n[arxiv 2024.11] Accelerating Vision Diffusion Transformers with Skip Branches  [[PDF](https://arxiv.org/abs/2411.17616),[Page](https://github.com/OpenSparseLLMs/Skip-DiT)] ![Code](https://img.shields.io/github/stars/OpenSparseLLMs/Skip-DiT?style=social&label=Star)\n\n[arxiv 2024.12] Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model  [[PDF](https://arxiv.org/abs/2411.19108),[Page](https://liewfeng.github.io/TeaCache/)] ![Code](https://img.shields.io/github/stars/LiewFeng/TeaCache?style=social&label=Star)\n\n[arxiv 2024.12] Individual Content and Motion Dynamics Preserved Pruning for Video Diffusion Models  [[PDF](https://arxiv.org/abs/2411.18350)] \n\n[arxiv 2024.12] Accelerating Video Diffusion Models via Distribution Matching  [[PDF](https://arxiv.org/abs/2412.05899)]\n\n[arxiv 2024.12] From Slow Bidirectional to Fast Causal Video Generators  [[PDF](https://arxiv.org/abs/2412.07772),[Page](https://causvid.github.io/)] \n\n[arxiv 2024.12]  Mobile Video Diffusion [[PDF](https://arxiv.org/abs/2412.07583)]\n\n[arxiv 2024.12]  AsymRnR: Video Diffusion Transformers Acceleration with Asymmetric Reduction and Restoration [[PDF](https://arxiv.org/abs/2412.11706)]\n\n[arxiv 2024.12]  SnapGen-V: Generating a Five-Second Video within Five Seconds on a Mobile Device [[PDF](https://arxiv.org/abs/2412.10494),[Page](https://snap-research.github.io/snapgen-v/)] \n\n[arxiv 2025.01] Diffusion Adversarial Post-Training for One-Step Video Generation  [[PDF](https://arxiv.org/abs/2501.08316)]\n\n[arxiv 2025.02]  Fast Video Generation with Sliding Tile Attention [[PDF](https://arxiv.org/pdf/2502.04507)]\n\n[arxiv 2025.02]  Magic 1-For-1: Generating One Minute Video Clips within One Minute [[PDF](https://arxiv.org/abs/2502.07701),[Page](https://magic-141.github.io/Magic-141/)] \n\n\n[arxiv 2025.02]  Hardware-Friendly Static Quantization Method for Video Diffusion Transformers [[PDF](https://arxiv.org/pdf/2502.15077)]\n\n[arxiv 2025.03] W2SVD: Weak-to-Strong Video Distillation for Large-Scale Portrait Few-Step Synthesis  [[PDF](https://arxiv.org/abs/2503.13319),[Page](https://w2svd.github.io/W2SVD/)]\n\n[arxiv 2025.04]  On-device Sora: Enabling Training-Free Diffusion-based Text-to-Video Generation for Mobile Devices [[PDF](https://arxiv.org/abs/2503.23796),[Page](https://github.com/eai-lab/On-device-Sora)] ![Code](https://img.shields.io/github/stars/eai-lab/On-device-Sora?style=social&label=Star)\n\n[arxiv 2025.05] Training-Free Efficient Video Generation via Dynamic Token Carving  [[PDF](https://arxiv.org/abs/2505.16864),[Page](https://julianjuaner.github.io/projects/jenga/)] ![Code](https://img.shields.io/github/stars/dvlab-research/Jenga/?style=social&label=Star)\n\n[arxiv 2025.05]  REPA Works Until It Doesn't: Early-Stopped, Holistic Alignment Supercharges Diffusion Training [[PDF](https://arxiv.org/abs/2505.16792),[Page](https://github.com/NUS-HPC-AI-Lab/HASTE)] ![Code](https://img.shields.io/github/stars/NUS-HPC-AI-Lab/HASTE?style=social&label=Star)\n\n[arxiv 2025.05]  DraftAttention: Fast Video Diffusion via Low-Resolution Attention Guidance [[PDF](https://arxiv.org/abs/2505.14708),[Page](https://github.com/shawnricecake/draft-attention)] ![Code](https://img.shields.io/github/stars/shawnricecake/draft-attention?style=social&label=Star)\n\n[arxiv 2025.05] Faster Video Diffusion with Trainable Sparse Attention  [[PDF](https://arxiv.org/pdf/2505.13389)]\n\n[arxiv 2025.05]  QVGen: Pushing the Limit of Quantized Video Generative Models [[PDF](https://arxiv.org/abs/2505.11497)]\n\n[arxiv 2025.06]  DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation [[PDF](https://arxiv.org/abs/2506.03123),[Page](https://github.com/Vchitect/DCM)] ![Code](https://img.shields.io/github/stars/Vchitect/DCM?style=social&label=Star)\n\n[arxiv 2025.06] Sparse-vDiT: Unleashing the Power of Sparse Attention to Accelerate Video Diffusion Transformers  [[PDF](https://arxiv.org/abs/2506.03065),[Page](https://github.com/Peyton-Chen/Sparse-vDiT)] ![Code](https://img.shields.io/github/stars/Peyton-Chen/Sparse-vDiT?style=social&label=Star)\n\n[arxiv 2025.06]  FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion [[PDF](https://arxiv.org/abs/2506.04648),[Page](https://fps.ziplab.co/)] \n\n[arxiv 2025.06] MagCache: Fast Video Generation with Magnitude-Aware Cache  [[PDF](https://arxiv.org/abs/2506.09045),[Page](https://zehong-ma.github.io/MagCache/)] ![Code](https://img.shields.io/github/stars/Zehong-Ma/MagCache?style=social&label=Star)\n\n[arxiv 2025.06]  Autoregressive Adversarial Post-Training for Real-Time Interactive Video Generation [[PDF](https://arxiv.org/abs/2506.09350),[Page](https://seaweed-apt.com/2)] \n\n[arxiv 2025.07]  VMoBA: Mixture-of-Block Attention for Video Diffusion Models [[PDF](https://arxiv.org/pdf/2506.23858),[Page](https://github.com/KwaiVGI/VMoBA)] ![Code](https://img.shields.io/github/stars/KwaiVGI/VMoBA?style=social&label=Star)\n\n[arxiv 2025.07]  Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching [[PDF](https://arxiv.org/abs/2507.02860),[Page](https://github.com/H-EmbodVis/EasyCache)] ![Code](https://img.shields.io/github/stars/H-EmbodVis/EasyCache?style=social&label=Star)\n\n[arxiv 2025.07] StreamDiT: Real-Time Streaming Text-to-Video Generation  [[PDF](https://arxiv.org/abs/2507.03745),[Page](https://cumulo-autumn.github.io/StreamDiT/)] \n\n[arxiv 2025.07] Taming Diffusion Transformer for Real-Time Mobile Video Generation  [[PDF](https://arxiv.org/abs/2507.13343),[Page](https://snap-research.github.io/mobile_video_dit/)] \n\n[arxiv 2025.08]  HierarchicalPrune: Position-Aware Compression for Large-Scale Diffusion Models [[PDF](https://arxiv.org/abs/2508.04663),[Page]()] \n\n[arxiv 2025.08] SwiftVideo: A Unified Framework for Few-Step Video Generation through Trajectory-Distribution Alignment  [[PDF](https://arxiv.org/abs/2508.06082)]\n\n[arxiv 2025.08] TaoCache: Structure-Maintained Video Generation Acceleration  [[PDF](https://arxiv.org/pdf/2508.08978)]\n\n[arxiv 2025.08]  Compact Attention: Exploiting Structured Spatio-Temporal Sparsity for Fast Video Generation [[PDF](https://arxiv.org/abs/2508.12969),[Page](https://yo-ava.github.io/Compact-Attention.github.io/)]\n\n[arxiv 2025.08] MixCache: Mixture-of-Cache for Video Diffusion Transformer Acceleration  [[PDF](https://arxiv.org/pdf/2508.12691)]\n\n[arxiv 2025.08] POSE: Phased One-Step Adversarial Equilibrium for Video Diffusion Models  [[PDF](https://arxiv.org/abs/2508.21019),[Page](https://pose-paper.github.io/)]\n\n[arxiv 2025.10] LightCache: Memory-Efficient, Training-Free Acceleration for Video Generation  [[PDF](https://arxiv.org/abs/2510.05367),[Page](https://github.com/NKUShaw/LightCache)] ![Code](https://img.shields.io/github/stars/NKUShaw/LightCache?style=social&label=Star)\n\n[arxiv 2025.10]  LinVideo: A Post-Training Framework towards O(n) Attention in Efficient Video Generation [[PDF](https://arxiv.org/abs/2510.08318)]\n\n[arxiv 2025.10] rCM: Score-Regularized Continuous-Time Consistency Model  [[PDF](https://arxiv.org/abs/2510.08431),[Page](https://research.nvidia.com/labs/dir/rcm/)] ![Code](https://img.shields.io/github/stars/NVlabs/rcm?style=social&label=Star)\n\n[arxiv 2025.11]  Towards One-Step Causal Video Generation via Adversarial Self-Distillation [[PDF](https://arxiv.org/abs/2511.01419),[Page](https://github.com/BigAandSmallq/SAD)] ![Code](https://img.shields.io/github/stars/BigAandSmallq/SAD?style=social&label=Star)\n\n[arxiv 2025.11] MotionStream: Real-Time Video Generation with Interactive Motion Controls  [[PDF](https://arxiv.org/abs/2511.01266),[Page](https://joonghyuk.com/motionstream-web/)] ![Code](https://img.shields.io/github/stars/alex4727/motionstream?style=social&label=Star)\n\n[arxiv 2025.11] StreamDiffusionV2: A Streaming System for Dynamic and Interactive Video Generation  [[PDF](https://arxiv.org/abs/2511.07399),[Page](http://streamdiffusionv2.github.io/)] ![Code](https://img.shields.io/github/stars/chenfengxu714/StreamDiffusionV2?style=social&label=Star)\n\n[arxiv 2025.11]  PipeDiT: Accelerating Diffusion Transformers in Video Generation with Task Pipelining and Model Decoupling [[PDF](https://arxiv.org/abs/2511.12056)]\n\n[arxiv 2025.12]  MobileI2V: Fast and High-Resolution Image-to-Video on Mobile Devices [[PDF](https://arxiv.org/abs/2511.21475),[Page](https://github.com/hustvl/MobileI2V)] ![Code](https://img.shields.io/github/stars/hustvl/MobileI2V?style=social&label=Star)\n\n[arxiv 2025.12] PSA: Pyramid Sparse Attention for Efficient Video Understanding and Generation  [[PDF](https://arxiv.org/pdf/2512.04025),[Page](https://ziplab.co/PSA/)] ![Code](https://img.shields.io/github/stars/ziplab/Pyramid-Sparse-Attention?style=social&label=Star)\n\n[arxiv 2025.12] Few-Step Distillation for Text-to-Image Generation: A Practical Guide  [[PDF](https://arxiv.org/abs/2512.13006),[Page](https://arxiv.org/abs/2512.13006v1)] ![Code](https://img.shields.io/github/stars/alibaba-damo-academy/T2I-Distill?style=social&label=Star)\n\n[arxiv 2025.12] TurboDiffusion: Accelerating Video Diffusion Models by 100–200 Times  [[PDF](https://arxiv.org/abs/2512.16093),[Page](https://github.com/thu-ml/TurboDiffusion)] ![Code](https://img.shields.io/github/stars/thu-ml/TurboDiffusion?style=social&label=Star)\n\n[arxiv 2026.01] PackCache: A Training-Free Acceleration Method for Unified Autoregressive Video Generation via Compact KV-Cache  [[PDF](https://arxiv.org/abs/2601.04359)]\n\n[arxiv 2026.01] MHLA: Restoring 
Expressivity of Linear Attention via Token-Level Multi-Head  [[PDF](https://arxiv.org/abs/2601.07832),[Page](https://github.com/DAGroup-PKU/MHLA)] ![Code](https://img.shields.io/github/stars/DAGroup-PKU/MHLA?style=social&label=Star)\n\n[arxiv 2026.01]  Transition Matching Distillation for Fast Video Generation [[PDF](https://arxiv.org/abs/2601.09881),[Page](https://research.nvidia.com/labs/genair/tmd/)] \n\n[arxiv 2026.01]  Efficient Autoregressive Video Diffusion with Dummy Head [[PDF](https://arxiv.org/abs/2601.20499),[Page](https://csguoh.github.io/project/DummyForcing/)] ![Code](https://img.shields.io/github/stars/csguoh/DummyForcing?style=social&label=Star)\n\n[arxiv 2026.01] VMonarch: Efficient Video Diffusion Transformers with Structured Attention  [[PDF](https://arxiv.org/pdf/2601.22275)]\n\n[arxiv 2026.01]  FSVideo: Fast Speed Video Diffusion Model in a Highly-Compressed Latent Space [[PDF](https://arxiv.org/abs/2602.02092),[Page](https://kingofprank.github.io/fsvideo/)]\n\n[arxiv 2026.01] Fast Autoregressive Video Diffusion and World Models with Temporal Cache Compression and Sparse Attention  [[PDF](https://arxiv.org/abs/2602.01801),[Page](https://dvirsamuel.github.io/fast-auto-regressive-video/)] \n\n[arxiv 2026.01] Light Forcing: Accelerating Autoregressive Video Diffusion via Sparse Attention  [[PDF](https://arxiv.org/abs/2602.04789),[Page](https://github.com/chengtao-lv/LightForcing)] ![Code](https://img.shields.io/github/stars/chengtao-lv/LightForcing?style=social&label=Star)\n\n[arxiv 2026.01] FlashBlock: Attention Caching for Efficient Long-Context Block Diffusion  [[PDF](),[Page](https://caesarhhh.github.io/FlashBlock/)]\n\n[arxiv 2026.02] DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers  [[PDF](https://arxiv.org/pdf/2602.16968)]\n\n[arxiv 2026.02]  Predict to Skip: Linear Multistep Feature Forecasting for Efficient Diffusion Transformers [[PDF](https://arxiv.org/abs/2602.18093)]\n\n[arxiv 2026.02] SeaCache: Spectral-Evolution-Aware Cache for Accelerating Diffusion Models  [[PDF](https://arxiv.org/abs/2602.18993),[Page](https://jiwoogit.github.io/SeaCache/)] ![Code](https://img.shields.io/github/stars/jiwoogit/SeaCache?style=social&label=Star)\n\n[arxiv 2026.03] FastLightGen: Fast and Light Video Generation with Fewer Steps and Parameters  [[PDF](https://arxiv.org/pdf/2603.01685)]\n\n[arxiv 2026.03] PreciseCache: Precise Feature Caching for Efficient and High-fidelity Video Generation  [[PDF](https://arxiv.org/abs/2603.00976)]\n\n[arxiv 2026.03] Accelerating Text-to-Video Generation with Calibrated Sparse Attention  [[PDF](https://arxiv.org/abs/2603.05503)]\n\n[arxiv 2026.03] FastSTAR: Spatiotemporal Token Pruning for Efficient Autoregressive Video Synthesis  [[PDF](https://arxiv.org/abs/2603.07192)]\n\n[arxiv 2026.03]  FrameDiT: Diffusion Transformer with Frame-Level Matrix Attention for Efficient Video Generation [[PDF](https://arxiv.org/pdf/2603.09721)]\n\n[arxiv 2026.03] SVG-EAR: Parameter-Free Linear Compensation for Sparse Video Generation via Error-aware Routing  [[PDF](https://arxiv.org/pdf/2603.08982)]\n\n[arxiv 2026.03] LatSearch: Latent Reward-Guided Search for Faster Inference-Time Scaling in Video Diffusion  [[PDF](https://arxiv.org/abs/2603.14526),[Page](https://zengqunzhao.github.io/LatSearch)] ![Code](https://img.shields.io/github/stars/zengqunzhao/LatSearch?style=social&label=Star)\n\n[arxiv 2026.03] 6Bit-Diffusion: Inference-Time Mixed-Precision Quantization for Video Diffusion Models  
[[PDF](https://arxiv.org/abs/2603.18742)]\n\n[arxiv 2026.03] Training-Free Sparse Attention for Fast Video Generation via Offline Layer-Wise Sparsity Profiling and Online Bidirectional Co-Clustering  [[PDF](https://arxiv.org/abs/2603.18636)]\n\n[arxiv 2026.03] InverFill: One-Step Inversion for Enhanced Few-Step Diffusion Inpainting  [[PDF](https://arxiv.org/abs/2603.23463)]\n\n[arxiv 2026.03] Adaptive Video Distillation: Mitigating Oversaturation and Temporal Collapse in Few-Step Generation  [[PDF](https://arxiv.org/abs/2603.21864),[Page](https://adaptive-video-distillation.github.io/)] ![Code](https://img.shields.io/github/stars/yuyangyou/Adaptive-Video-Distillation?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Dataset optimization \n[arxiv 2025.01] A Large-Scale Study on Video Action Dataset Condensation  [[PDF](https://arxiv.org/abs/2412.21197),[Page](https://github.com/MCG-NJU/Video-DC)] ![Code](https://img.shields.io/github/stars/MCG-NJU/Video-DC?style=social&label=Star)\n\n\n\n## Others \n[arxiv 2023.05]AADiff: Audio-Aligned Video Synthesis with Text-to-Image Diffusion [[PDF](https://arxiv.org/abs/2305.04001)]\n\n[arxiv 2023.05]Multi-object Video Generation from Single Frame Layouts [[PDF](https://arxiv.org/abs/2305.03983)]\n\n[arxiv 2023.06]Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions [[PDF](https://arxiv.org/abs/2306.03988)]\n\n[arxiv 2023.08]DiffSynth: Latent In-Iteration Deflickering for Realistic Video Synthesis [[PDF](https://arxiv.org/abs/2308.03463)]\n\n\n## CG2real\n[arxiv 2024.09] AMG: Avatar Motion Guided Video Generation [[PDF](https://arxiv.org/abs/2409.01502),[Page](https://github.com/zshyang/amg)]\n\n[arxiv 2024.09] Compositional 3D-aware Video Generation with LLM Director [[PDF](https://arxiv.org/abs/2409.00558),[Page](https://www.microsoft.com/en-us/research/project/compositional-3d-aware-video-generation/)]\n\n[arxiv 2024.10] SceneCraft: Layout-Guided 3D Scene Generation [[PDF](https://arxiv.org/abs/2410.09049),[Page](https://orangesodahub.github.io/SceneCraft)]\n\n[arxiv 2024.10] Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models [[PDF](https://arxiv.org/abs/2410.10821),[Page](https://tex4d.github.io/)]\n\n[arxiv 2024.10] Diffusion Curriculum: Synthetic-to-Real Generative Curriculum Learning via Image-Guided Diffusion [[PDF](https://arxiv.org/abs/2410.13674),[Page](https://github.com/tianyi-lab/DisCL)]\n\n[arxiv 2024.10] FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models [[PDF](https://arxiv.org/pdf/2410.14429),[Page](https://rickhh.github.io/FashionR2R/)]\n\n[arxiv 2026.01]  Sim2real Image Translation Enables Viewpoint-Robust Policies from Fixed-Camera Datasets [[PDF](https://arxiv.org/abs/2601.09605)]\n\n[arxiv 2026.03] RealMaster: Lifting Rendered Scenes into Photorealistic Video  [[PDF](https://arxiv.org/abs/2603.23462),[Page](https://danacohen95.github.io/RealMaster/)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## world model & interactive generation\n[arxiv 2024.06] AVID: Adapting Video Diffusion Models to World Models [[PDF](),[Page](https://sites.google.com/view/avid-world-model-adapters/home)]\n\n[arxiv 2024.08]Diffusion Models Are Real-Time Game Engines 
[[PDF](https://arxiv.org/abs/2408.14837),[Page](https://gamengen.github.io/)]\n\n[arxiv 2024.08] Body of Her: A Preliminary Study on End-to-End Humanoid Agent  [[PDF](https://arxiv.org/pdf/2408.02879)] \n\n[arxiv 2024.09] Video Game Generation: A Practical Study using Mario [[PDF](https://virtual-protocol.github.io/mario-videogamegen/static/pdfs/VideoGameGen.pdf),[Page](https://virtual-protocol.github.io/mario-videogamegen/)]\n\n\n[arxiv 2024.10] WorldSimBench: Towards Video Generation Models as World Simulators [[PDF](https://arxiv.org/abs/2410.18072),[Page](https://iranqin.github.io/WorldSimBench.github.io/)]\n\n[arxiv 2024.10] Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos [[PDF](https://arxiv.org/abs/2410.16259),[Page](https://gengshan-y.github.io/agent2sim-www/)]\n\n[arxiv 2024.10] SlowFast-VGen: Slow-Fast Learning for Action-Driven Long Video Generation [[PDF](https://arxiv.org/abs/2410.23277),[Page](https://slowfast-vgen.github.io/)]\n\n[arxiv 2024.10] ADAM: An Embodied Causal Agent in Open-World Environments [[PDF](https://arxiv.org/abs/2410.22194),[Page](https://opencausalab.github.io/ADAM/)]\n\n[arxiv 2024.11] How Far is Video Generation from World Model: A Physical Law Perspective [[PDF](https://arxiv.org/abs/2411.02385),[Page](https://phyworld.github.io/)]\n\n[arxiv 2024.11] Oasis: an interactive, explorable world model [[PDF](https://oasis-model.github.io/),[Page](https://www.etched.com/blog-posts/oasis)]\n\n[arxiv 2024.11] GameGen-X: Interactive Open-world Game Video Generation [[PDF](https://arxiv.org/abs/2411.00769),[Page](https://gamegen-x.github.io/)]\n\n[arxiv 2024.11] Generative World Explorer [[PDF](https://arxiv.org/abs/2411.11844),[Page](http://generative-world-explorer.github.io/)]\n\n[arxiv 2024.11] The Matrix: Infinite-Horizon World Generation with Real-Time Interaction [[PDF](https://thematrix1999.github.io/article/the_matrix.pdf),[Page](https://thematrix1999.github.io/)]\n\n[arxiv 2024.12] Navigation World Models  [[PDF](https://arxiv.org/abs/2412.03572),[Page](https://www.amirbar.net/nwm/)]\n\n[arxiv 2024.12] GenEx: Generating an Explorable World  [[PDF](https://arxiv.org/abs/2412.09624),[Page](http://genex.world/)] \n\n[arxiv 2025.01] GameFactory: Creating New Games with Generative Interactive Videos  [[PDF](KwaiVGI/GameFactory),[Page](https://vvictoryuki.github.io/gamefactory/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/GameFactory?style=social&label=Star)\n\n[arxiv 2025.02] Pre-Trained Video Generative Models as World Simulators  [[PDF](https://arxiv.org/pdf/2502.07825)]\n\n[arxiv 2025.03] Position: Interactive Generative Video as Next-Generation Game Engine  [[PDF](https://arxiv.org/abs/2503.17359)]\n\n[arxiv 2025.04]  Can Test-Time Scaling Improve World Foundation Model? 
[[PDF](https://arxiv.org/abs/2503.24320),[Page](https://github.com/Mia-Cong/SWIFT)] ![Code](https://img.shields.io/github/stars/Mia-Cong/SWIFT?style=social&label=Star)\n\n[arxiv 2025.04] WorldScore: A Unified Evaluation Benchmark for World Generation  [[PDF](https://arxiv.org/abs/2504.00983),[Page](https://haoyi-duan.github.io/WorldScore/)] ![Code](https://img.shields.io/github/stars/haoyi-duan/WorldScore?style=social&label=Star)\n\n[arxiv 2025.04] MineWorld: a Real-Time and Open-Source Interactive World Model on Minecraft  [[PDF](https://arxiv.org/abs/2504.08388),[Page](https://github.com/microsoft/MineWorld)] ![Code](https://img.shields.io/github/stars/microsoft/MineWorld?style=social&label=Star)\n\n[arxiv 2025.05] A Survey of Interactive Generative Video  [[PDF](https://arxiv.org/pdf/2504.21853)]\n\n[arxiv 2025.05] Vid2World: Crafting Video Diffusion Models to Interactive World Models  [[PDF](https://arxiv.org/abs/2505.14357),[Page](https://knightnemo.github.io/vid2world/)] \n\n[arxiv 2025.06]  Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation [[PDF](https://arxiv.org/abs/2506.04225),[Page](https://voyager-world.github.io/)] ![Code](https://img.shields.io/github/stars/Voyager-World/Voyager?style=social&label=Star)\n\n[arxiv 2025.06] V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning  [[PDF](https://arxiv.org/abs/2506.09985),[Page](https://github.com/facebookresearch/vjepa2)] ![Code](https://img.shields.io/github/stars/facebookresearch/vjepa2?style=social&label=Star)\n\n[arxiv 2025.06]  Hunyuan-GameCraft: High-dynamic Interactive Game Video Generation with Hybrid History Condition [[PDF](https://arxiv.org/abs/2506.17201),[Page](https://hunyuan-gamecraft.github.io/)] \n\n[arxiv 2025.06] From Virtual Games to Real-World Play  [[PDF](https://arxiv.org/pdf/2506.18901),[Page](https://wenqsun.github.io/RealPlay/)] ![Code](https://img.shields.io/github/stars/wenqsun/Real-Play?style=social&label=Star)\n\n[arxiv 2025.06]   Matrix-Game: Interactive World Foundation Model[[PDF](https://arxiv.org/pdf/2506.18701),[Page](https://matrix-game-homepage.github.io/)] ![Code](https://img.shields.io/github/stars/SkyworkAI/Matrix-Game?style=social&label=Star)\n\n[arxiv 2025.06] Whole-Body Conditioned Egocentric Video Prediction [[PDF](http://arxiv.org/abs/2506.21552),[Page](https://dannytran123.github.io/PEVA/)] \n\n[arxiv 2025.07] Critiques of World Models  [[PDF](https://arxiv.org/pdf/2507.05169)]\n\n[arxiv 2025.07]  MindJourney: Test-Time Scaling with World Models for Spatial Reasoning [[PDF](https://arxiv.org/abs/2507.12508),[Page](https://umass-embodied-agi.github.io/MindJourney/)] ![Code](https://img.shields.io/github/stars/UMass-Embodied-AGI/MindJourney?style=social&label=Star)\n\n[arxiv 2025.08]  Matrix-3D: Omnidirectional Explorable 3D World Generation [[PDF](https://arxiv.org/abs/2508.08086),[Page](https://matrix-3d.github.io/)] ![Code](https://img.shields.io/github/stars/SkyworkAI/Matrix-3D?style=social&label=Star)\n\n[arxiv 2025.08]  Yan: Foundational Interactive Video Generation [[PDF](https://arxiv.org/pdf/2508.08601),[Page](https://greatx3.github.io/Yan/)] \n\n[arxiv 2025.08] Matrix-Game 2.0: An Open-Source, Real-Time, and Streaming Interactive World Model  [[PDF](https://arxiv.org/abs/2508.13009),[Page](https://matrix-game-v2.github.io/)] ![Code](https://img.shields.io/github/stars/SkyworkAI/Matrix-Game?style=social&label=Star)\n\n[arxiv 2025.10]  VideoVerse: How Far is Your T2V Generator from a World Model? 
[[PDF](https://arxiv.org/abs/2510.08398)]\n\n[arxiv 2025.10] World-in-World: World Models in a Closed-Loop World  [[PDF](https://arxiv.org/pdf/2510.18135),[Page](https://world-in-world.github.io/)] ![Code](https://img.shields.io/github/stars/World-In-World/world-in-world?style=social&label=Star)\n\n[arxiv 2025.11]  World Simulation with Video Foundation Models for Physical AI [[PDF](https://arxiv.org/abs/2511.00062),[Page](https://github.com/nvidia-cosmos/cosmos-predict2.5)] ![Code](https://img.shields.io/github/stars/nvidia-cosmos/cosmos-predict2.5?style=social&label=Star)\n\n[arxiv 2025.11] PAN: A World Model for General, Interactable, and Long-Horizon World Simulation  [[PDF](https://arxiv.org/abs/2511.09057)]\n\n[arxiv 2025.11] MagicWorld: Interactive Geometry-driven Video World Exploration  [[PDF](https://arxiv.org/abs/2511.18886),[Page](https://vivocameraresearch.github.io/magicworld/)] \n\n[arxiv 2025.12]  Hunyuan-GameCraft-2: Instruction-following Interactive Game World Model [[PDF](https://arxiv.org/abs/2511.23429),[Page](https://hunyuan-gamecraft-2.github.io/)] \n\n[arxiv 2025.12] RELIC: Interactive Video World Model with Long-Horizon Memory  [[PDF](https://arxiv.org/abs/2512.04040),[Page](https://relic-worldmodel.github.io/)] \n\n[arxiv 2025.12] LongVie 2: Multimodal Controllable Ultra-Long Video World Model  [[PDF](https://arxiv.org/abs/2512.13604),[Page](https://vchitect.github.io/LongVie2-project/)] ![Code](https://img.shields.io/github/stars/Vchitect/LongVie?style=social&label=Star)\n\n[arxiv 2025.12]  WorldPlay: Towards Long-Term Geometric Consistency for Real-Time Interactive World Modeling [[PDF](https://arxiv.org/abs/2512.14614),[Page](https://3d-models.hunyuan.tencent.com/world/)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HY-WorldPlay?style=social&label=Star)\n\n[arxiv 2025.12]  Spatia: Video Generation with Updatable Spatial Memory [[PDF](https://arxiv.org/abs/2512.15716),[Page](https://zhaojingjing713.github.io/Spatia/)] ![Code](https://img.shields.io/github/stars/ZhaoJingjing713/Spatia?style=social&label=Star)\n\n[arxiv 2025.12] Yume-1.5: A Text-Controlled Interactive World Generation Model  [[PDF](https://arxiv.org/abs/2512.22096),[Page](https://stdstu12.github.io/YUME-Project/)] ![Code](https://img.shields.io/github/stars/stdstu12/YUME?style=social&label=Star)\n\n[arxiv 2026.01]  Learning Latent Action World Models In The Wild [[PDF](https://arxiv.org/abs/2601.05230)]\n\n[arxiv 2026.01]  StableWorld: Towards Stable and Consistent Long Interactive Video Generation [[PDF](https://arxiv.org/pdf/2601.15281),[Page](https://sd-world.github.io/)] ![Code](https://img.shields.io/github/stars/xbyym/StableWorld?style=social&label=Star)\n\n[arxiv 2026.01]  Advancing Open-source World Models [[PDF](https://arxiv.org/abs/2601.20540),[Page](https://technology.robbyant.com/lingbot-world)] ![Code](https://img.shields.io/github/stars/robbyant/lingbot-world?style=social&label=Star)\n\n[arxiv 2026.01] Infinite-World: Scaling Interactive World Models to 1000-Frame Horizons via Pose-Free Hierarchical Memory  [[PDF](https://arxiv.org/abs/2602.02393),[Page](https://rq-wu.github.io/projects/infinite-world/index.html)] ![Code](https://img.shields.io/github/stars/MeiGen-AI/Infinite-World?style=social&label=Star)\n\n[arxiv 2026.01]  LIVE: Long-horizon Interactive Video World Modeling [[PDF](https://arxiv.org/abs/2602.03747),[Page](https://junchao-cs.github.io/LIVE-demo/)] ![Code](https://img.shields.io/github/stars/Junchao-cs/LIVE?style=social&label=Star)\n\n[arxiv 2026.01] 
DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos  [[PDF](https://arxiv.org/abs/2602.06949),[Page](https://dreamdojo-world.github.io/)] \n\n[arxiv 2026.02] WorldCompass: Reinforcement Learning for Long-Horizon World Models  [[PDF](https://arxiv.org/abs/2602.09022),[Page](https://3d-models.hunyuan.tencent.com/world/)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HY-WorldCompass?style=social&label=Star)\n\n[arxiv 2026.02] AnchorWeave: World-Consistent Video Generation with Retrieved Local Spatial Memories  [[PDF](https://arxiv.org/abs/2602.14941),[Page](https://zunwang1.github.io/AnchorWeave)] ![Code](https://img.shields.io/github/stars/wz0919/AnchorWeave?style=social&label=Star)\n\n[arxiv 2026.03]  ShareVerse: Multi-Agent Consistent Video Generation for Shared World Modeling [[PDF](https://arxiv.org/pdf/2603.02697)]\n\n[arxiv 2026.03] WorldCache: Accelerating World Models for Free via Heterogeneous Token Caching  [[PDF](https://arxiv.org/abs/2603.06331),[Page](https://github.com/FofGofx/WorldCache)] ![Code](https://img.shields.io/github/stars/FofGofx/WorldCache?style=social&label=Star)\n\n[arxiv 2026.03] WorldCam: Interactive Autoregressive 3D Gaming Worlds with Camera Pose as a Unifying Geometric Representation  [[PDF](https://arxiv.org/abs/2603.16871),[Page](https://cvlab-kaist.github.io/WorldCam/)] ![Code](https://img.shields.io/github/stars/cvlab-kaist/WorldCam?style=social&label=Star)\n\n[arxiv 2026.03] VectorWorld: Efficient Streaming World Model via Diffusion Flow on Vector Graphs  [[PDF](https://arxiv.org/abs/2603.17652),[Page](https://jiangchaokang.github.io/VectorWorld/)] ![Code](https://img.shields.io/github/stars/jiangchaokang/VectorWorld?style=social&label=Star)\n\n[arxiv 2026.03] Out of Sight but Not Out of Mind: Hybrid Memory for Dynamic Video World Models  [[PDF](https://arxiv.org/abs/2603.25716),[Page](https://kj-chen666.github.io/Hybrid-Memory-in-Video-World-Models/)] ![Code](https://img.shields.io/github/stars/H-EmbodVis/HyDRA?style=social&label=Star)\n\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## memory\n[arxiv 2025.06]  Context as Memory: Scene-Consistent Interactive Long Video Generation with Memory Retrieval [[PDF](https://arxiv.org/abs/2506.03141),[Page](https://context-as-memory.github.io/)]\n\n[arxiv 2025.06] Video World Models with Long-term Spatial Memory  [[PDF](http://arxiv.org/abs/2506.05284),[Page](https://spmem.github.io/)] \n\n[arxiv 2025.10]  EvoWorld: Evolving Panoramic World Generation with Explicit 3D Memory [[PDF](https://arxiv.org/abs/2510.01183),[Page](https://github.com/JiahaoPlus/EvoWorld)] ![Code](https://img.shields.io/github/stars/JiahaoPlus/EvoWorld?style=social&label=Star)\n\n[arxiv 2025.10]  Memory Forcing: Spatio-Temporal Memory for Consistent Scene Generation on Minecraft [[PDF](https://arxiv.org/abs/2510.03198)]\n\n[arxiv 2025.11]  Learning Plug-and-play Memory for Guiding Video Diffusion Models [[PDF](https://arxiv.org/pdf/2511.19229),[Page](https://thrcle421.github.io/DiT-Mem-Web/)] ![Code](https://img.shields.io/github/stars/Thrcle421/DiT-Mem?style=social&label=Star)\n\n[arxiv 2025.12]  VL-JEPA: Joint Embedding Predictive Architecture for Vision-language [[PDF](https://arxiv.org/abs/2512.10942)]\n\n[arxiv 2026.01] Memory-V2V: Augmenting Video-to-Video Diffusion Models with Memory  
[[PDF](https://arxiv.org/abs/2601.16296),[Page](https://dohunlee1.github.io/MemoryV2V/)] ![Code](https://img.shields.io/github/stars/DoHunLee1/Memory-V2V?style=social&label=Star)\n\n[arxiv 2026.03] Grounding World Simulation Models in a Real-World Metropolis  [[PDF](https://arxiv.org/abs/2603.15583),[Page](https://seoul-world-model.github.io/)] ![Code](https://img.shields.io/github/stars/naver-ai/seoul-world-model?style=social&label=Star)\n\n[arxiv 2026.03] MemRoPE: Training-Free Infinite Video Generation via Evolving Memory Tokens  [[PDF](https://arxiv.org/abs/2603.12513),[Page](https://memrope.github.io)] ![Code](https://img.shields.io/github/stars/YoungRaeKimm/MemRoPE?style=social&label=Star)\n\n[arxiv 2026.03] MosaicMem: Hybrid Spatial Memory for Controllable Video World Models  [[PDF](https://arxiv.org/abs/2603.17117),[Page](https://mosaicmem.github.io)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## 3D generation\n[arxiv 2025.09]  LatticeWorld: A Multimodal Large Language Model-Empowered Framework for Interactive Complex World Generation [[PDF](https://arxiv.org/pdf/2509.05263)]\n\n[arxiv 2025.09] Hunyuan3D-Omni: A Unified Framework for Controllable Generation of 3D Assets  [[PDF](https://arxiv.org/abs/2509.21245),[Page](https://github.com/Tencent-Hunyuan/Hunyuan3D-Omni)] ![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/Hunyuan3D-Omni?style=social&label=Star)\n\n[arxiv 2025.10] Seed3D 1.0: From Images to High-Fidelity Simulation-Ready 3D Assets  [[PDF](https://arxiv.org/abs/2510.19944),[Page](https://seed.bytedance.com/seed3d)] \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## driving\n[arxiv 2024.10] FreeVS: Generative View Synthesis on Free Driving Trajectory [[PDF](https://arxiv.org/abs/2410.18079),[Page](https://freevs24.github.io/)]\n\n[arxiv 2024.11] DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving  [[PDF](https://arxiv.org/abs/2411.15139),[Page](https://github.com/hustvl/DiffusionDrive)] ![Code](https://img.shields.io/github/stars/hustvl/DiffusionDrive?style=social&label=Star)\n\n[arxiv 2024.12]  InfinityDrive: Breaking Time Limits in Driving World Models [[PDF](https://arxiv.org/abs/2412.01522),[Page](https://metadrivescape.github.io/papers_project/InfinityDrive/page.html)] \n\n[arxiv 2024.12] Stag-1: Towards Realistic 4D Driving Simulation with Video Generation Model  [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/wzzheng/Stag?style=social&label=Star)\n\n[arxiv 2024.12]  UniMLVG: Unified Framework for Multi-view Long Video Generation with Comprehensive Control Capabilities for Autonomous Driving [[PDF](https://arxiv.org/abs/2412.04842),[Page](https://sensetime-fvg.github.io/UniMLVG/)]\n\n[arxiv 2024.12] UniScene: Unified Occupancy-centric Driving Scene Generation  [[PDF](https://arxiv.org/abs/2412.05435),[Page](https://arlo0o.github.io/uniscene/)]\n\n[arxiv 2024.12] ACT-BENCH: Towards Action Controllable World Models for Autonomous Driving  [[PDF](https://arxiv.org/abs/2412.05337)]\n\n[arxiv 2024.12] Physical-Informed Driving World Model  [[PDF](https://arxiv.org/abs/2412.08410)]\n\n[arxiv 2024.12]  Doe-1: Closed-Loop Autonomous Driving with Large World Model [[PDF](https://arxiv.org/pdf/2412.08643),[Page](https://wzzheng.net/Doe)] ![Code](https://img.shields.io/github/stars/wzzheng/Doe?style=social&label=Star)\n\n[arxiv 2024.12]  GaussianWorld: Gaussian World Model for 
Streaming 3D Occupancy Prediction [[PDF](https://arxiv.org/abs/2412.10373),[Page](https://github.com/zuosc19/GaussianWorld)] ![Code](https://img.shields.io/github/stars/zuosc19/GaussianWorld?style=social&label=Star)\n\n[arxiv 2024.12] StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models  [[PDF](https://arxiv.org/pdf/2412.13188)]\n\n[arxiv 2025.01]  DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT [[PDF](https://arxiv.org/abs/2412.19505),[Page](https://huxiaotaostasy.github.io/DrivingWorld/index.html)] ![Code](https://img.shields.io/github/stars/YvanYin/DrivingWorld?style=social&label=Star)\n\n[arxiv 2025.01] HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation  [[PDF](https://arxiv.org/abs/2501.14729),[Page](https://github.com/LMD0311/HERMES)] ![Code](https://img.shields.io/github/stars/LMD0311/HERMES?style=social&label=Star)\n\n[arxiv 2025.02] VaViM and VaVAM: Autonomous Driving through Video Generative Modeling  [[PDF](https://arxiv.org/abs/2502.15672),[Page](https://valeoai.github.io/vavim-vavam/)] ![Code](https://img.shields.io/github/stars/valeoai/VideoActionModel?style=social&label=Star)\n\n[arxiv 2025.03] MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving  [[PDF](https://xiaomi-mlab.github.io/mila.github.io/),[Page](https://github.com/xiaomi-mlab/mila.github.io)] ![Code](https://img.shields.io/github/stars/xiaomi-mlab/mila.github.io?style=social&label=Star)\n\n[arxiv 2025.06] ReSim: Reliable World Simulation for Autonomous Driving  [[PDF](https://arxiv.org/abs/2506.09981),[Page](https://opendrivelab.com/ReSim)] ![Code](https://img.shields.io/github/stars/OpenDriveLab/ReSim?style=social&label=Star)\n\n[arxiv 2025.07]  Epona: Autoregressive Diffusion World Model for Autonomous Driving [[PDF](https://arxiv.org/abs/2506.24113),[Page](https://kevin-thu.github.io/Epona/)] \n\n[arxiv 2025.07] A Survey on Vision-Language-Action Models for Autonomous Driving  [[PDF](https://arxiv.org/abs/2506.24044),[Page](https://github.com/JohnsonJiang1996/Awesome-VLA4AD)] ![Code](https://img.shields.io/github/stars/JohnsonJiang1996/Awesome-VLA4AD?style=social&label=Star)\n\n[arxiv 2025.08]  ImagiDrive: A Unified Imagination-and-Planning Framework for Autonomous Driving [[PDF](https://arxiv.org/pdf/2508.11428),[Page](https://github.com/fudan-zvg/ImagiDrive)] ![Code](https://img.shields.io/github/stars/fudan-zvg/ImagiDrive?style=social&label=Star)\n\n[arxiv 2025.10] Drive&Gen: Co-Evaluating End-to-End Driving and Video Generation Models  [[PDF](https://arxiv.org/abs/2510.06209)]\n\n[arxiv 2025.10]  DriveVLA-W0: World Models Amplify Data Scaling Law in Autonomous Driving [[PDF](https://arxiv.org/abs/2510.12796),[Page](https://github.com/BraveGroup/DriveVLA-W0)] ![Code](https://img.shields.io/github/stars/BraveGroup/DriveVLA-W0?style=social&label=Star)\n\n[arxiv 2026.01] MAD: Motion Appearance Decoupling for efficient Driving World Models  [[PDF](https://arxiv.org/abs/2601.09452)]\n\n[arxiv 2026.01]  Drive-JEPA: Video JEPA Meets Multimodal Trajectory Distillation for End-to-End Driving [[PDF](https://arxiv.org/abs/2601.22032),[Page](https://github.com/linhanwang/Drive-JEPA)] ![Code](https://img.shields.io/github/stars/linhanwang/Drive-JEPA?style=social&label=Star)\n\n[arxiv 2026.01]  UniDriveDreamer: A Single-Stage Multimodal World Model for Autonomous Driving 
[[PDF](https://arxiv.org/abs/2602.02002)]\n\n[arxiv 2026.01]  InstaDrive: Instance-Aware Driving World Models for Realistic and Consistent Video Generation [[PDF](https://arxiv.org/abs/2602.03242),[Page](https://shanpoyang654.github.io/InstaDrive/page.html)] ![Code](https://img.shields.io/github/stars/shanpoyang654/InstaDrive?style=social&label=Star)\n\n[arxiv 2026.03] AutoMoT: A Unified Vision-Language-Action Model with Asynchronous Mixture-of-Transformers for End-to-End Autonomous Driving  [[PDF](https://arxiv.org/abs/2603.14851),[Page](https://automot-website.github.io/)]\n\n[arxiv 2026.03] WorldCache: Content-Aware Caching for Accelerated Video World Models  [[PDF](https://arxiv.org/abs/2603.22286),[Page](https://umair1221.github.io/World-Cache/)]\n\n[arxiv 2026.03] Drive My Way: Preference Alignment of Vision-Language-Action Model for Personalized Driving  [[PDF](https://arxiv.org/abs/2603.25740),[Page](https://dmw-cvpr.github.io/)]\n\n[arxiv 2026.03] Latent-WAM: Latent World Action Modeling for End-to-End Autonomous Driving  [[PDF](https://arxiv.org/abs/2603.24581)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Feedback\n\n[arxiv 2024.12]  Improving Dynamic Object Interactions in Text-to-Video Generation with AI Feedback [[PDF](https://arxiv.org/abs/2412.02617),[Page](https://sites.google.com/view/aif-dynamic-t2v/)] \n\n\n[arxiv 2024.12]  LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment [[PDF](https://arxiv.org/pdf/2412.04814),[Page](https://codegoat24.github.io/LiFT/)] ![Code](https://img.shields.io/github/stars/CodeGoat24/LiFT?style=social&label=Star)\n\n[arxiv 2024.12] OnlineVPO: Align Video Diffusion Model with Online Video-Centric Preference Optimization  [[PDF](https://arxiv.org/abs/2412.15159),[Page](https://onlinevpo.github.io/)] \n\n[arxiv 2024.12] Prompt-A-Video: Prompt Your Video Diffusion Model via Preference-Aligned LLM  [[PDF](https://arxiv.org/abs/2412.15156),[Page](https://arxiv.org/abs/2412.15156)] \n\n[arxiv 2025.01] Personalized Preference Fine-tuning of Diffusion Models  [[PDF](https://arxiv.org/pdf/2501.06655)]\n\n[arxiv 2025.01] Improving Video Generation with Human Feedback  [[PDF](https://arxiv.org/abs/2501.13918),[Page](https://gongyeliu.github.io/videoalign/)] \n\n[arxiv 2025.02]  HuViDPO: Enhancing Video Generation through Direct Preference Optimization for Human-Centric Alignment [[PDF](https://arxiv.org/abs/2502.01690)]\n\n[arxiv 2025.02] CHATS: Combining Human-Aligned Optimization and Test-Time Sampling for Text-to-Image Generation  [[PDF](https://arxiv.org/pdf/2502.12579)]\n\n[arxiv 2025.05] DanceGRPO: Unleashing GRPO on Visual Generation  [[PDF](https://arxiv.org/abs/2505.07818),[Page](https://dancegrpo.github.io/)] ![Code](https://img.shields.io/github/stars/XueZeyue/DanceGRPO?style=social&label=Star)\n\n[arxiv 2025.06] DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models  [[PDF](https://arxiv.org/abs/2506.03517),[Page](https://snap-research.github.io/DenseDPO/)] \n\n[arxiv 2025.06]  RDPO: Real Data Preference Optimization for Physics Consistency Video Generation [[PDF](),[Page](https://wwenxu.github.io/RDPO/)] \n\n[arxiv 2025.09] RewardDance: Reward Scaling in Visual Generation  [[PDF](https://arxiv.org/pdf/2509.08826)]\n\n[arxiv 2025.10]  VideoReward Thinker: Boosting Video Reward Models through Thinking-with-Image Reasoning 
[[PDF](https://arxiv.org/abs/2510.10518),[Page](https://github.com/qunzhongwang/vr-thinker)] ![Code](https://img.shields.io/github/stars/qunzhongwang/vr-thinker?style=social&label=Star)\n\n[arxiv 2025.10] PhysMaster: Mastering Physical Representation for Video Generation via Reinforcement Learning  [[PDF](https://arxiv.org/pdf/2510.13809),[Page](https://sihuiji.github.io/PhysMaster-Page/)] ![Code](https://img.shields.io/github/stars/KwaiVGI/PhysMaster?style=social&label=Star)\n\n[arxiv 2025.10] RealDPO: Real or Not Real, that is the Preference  [[PDF](https://arxiv.org/abs/2510.14955),[Page](https://vchitect.github.io/RealDPO-Project/)] ![Code](https://img.shields.io/github/stars/Vchitect/RealDPO?style=social&label=Star)\n\n[arxiv 2025.10] Identity-GRPO: Optimizing Multi-Human Identity-preserving Video Generation via Reinforcement Learning  [[PDF](https://arxiv.org/pdf/2510.14256),[Page](https://ali-videoai.github.io/identity_page/)] ![Code](https://img.shields.io/github/stars/alibaba/identity-grpo?style=social&label=Star)\n\n[arxiv 2025.10]  Identity-Preserving Image-to-Video Generation via Reward-Guided Optimization [[PDF](https://arxiv.org/abs/2510.14255),[Page](https://ipro-alimama.github.io/)] \n\n[arxiv 2025.10]  Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences [[PDF](https://arxiv.org/abs/2510.23451),[Page](https://omnireward.github.io/)] ![Code](https://img.shields.io/github/stars/HongbangYuan/OmniReward?style=social&label=Star)\n\n[arxiv 2025.11]  Growing with the Generator: Self-paced GRPO for Video Generation [[PDF](https://arxiv.org/pdf/2511.19356)]\n\n[arxiv 2025.12]  PhyGDPO: Physics-Aware Groupwise Direct Preference Optimization for Physically Consistent Text-to-Video Generation [[PDF](https://arxiv.org/abs/2512.24551),[Page](https://caiyuanhao1998.github.io/project/PhyGDPO/)] ![Code](https://img.shields.io/github/stars/caiyuanhao1998/Open-PhyGDPO?style=social&label=Star)\n\n[arxiv 2026.01] TAGRPO: Boosting GRPO on Image-to-Video Generation with Direct Trajectory Alignment  [[PDF](https://arxiv.org/pdf/2601.05729)]\n\n[arxiv 2026.01]  PhysRVG: Physics-Aware Unified Reinforcement Learning for Video Generative Models [[PDF](https://arxiv.org/abs/2601.11087)]\n\n[arxiv 2026.01] Human detectors are surprisingly powerful reward models  [[PDF](https://arxiv.org/pdf/2601.14037),[Page](https://huda-reward-model.github.io/)] \n\n[arxiv 2026.03]  FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation [[PDF](https://arxiv.org/abs/2603.00159)]\n\n[arxiv 2026.03] SHIFT: Motion Alignment in Video Diffusion Models with Adversarial Hybrid Fine-Tuning  [[PDF](https://arxiv.org/abs/2603.17426)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## improving AR \n[arxiv 2026.03] AR-CoPO: Align Autoregressive Video Generation with Contrastive Policy Optimization  [[PDF](https://arxiv.org/abs/2603.17461)]\n\n[arxiv 2026.03] Astrolabe: Steering Forward-Process RL for Distilled Autoregressive Video Models  [[PDF](https://arxiv.org/abs/2603.17051),[Page](https://franklinz233.github.io/projects/astrolabe/)] ![Code](https://img.shields.io/github/stars/franklinz233/Astrolabe?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## CV Related \n[arxiv 2022.12; ByteDance] PV3D: A 3D GENERATIVE MODEL FOR 
PORTRAIT VIDEO GENERATION [[PDF](https://arxiv.org/pdf/2212.06384.pdf)]\n\n[arxiv 2022.12]MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation[[PDF](https://arxiv.org/pdf/2212.09478.pdf)]\n\n[arxiv 2022.12]Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation [[PDF](https://arxiv.org/pdf/2212.11565.pdf), [Page](https://tuneavideo.github.io/)]\n\n[arxiv 2023.01]Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation [[PDF](https://arxiv.org/pdf/2301.03396.pdf), [Page](https://mstypulkowski.github.io/diffusedheads)]\n\n[arxiv 2023.01]DiffTalk: Crafting Diffusion Models for Generalized Talking Head Synthesis [[PDF](https://arxiv.org/pdf/2301.03786.pdf), [Page](https://sstzal.github.io/DiffTalk/)]\n\n[arxiv 2023.02 Google]Scaling Vision Transformers to 22 Billion Parameters [[PDF](https://arxiv.org/abs/2302.05442)]\n\n[arxiv 2023.05]VDT: An Empirical Study on Video Diffusion with Transformers [[PDF](https://arxiv.org/abs/2305.13311), [code](https://github.com/RERV/VDT)]\n\n[arxiv 2024] MAGVIT-V2 : Language Model Beats Diffusion -- Tokenizer is Key to Visual Generation [[PDF](https://arxiv.org/abs/2310.05737)]\n\n[arxiv 2024.08]Sapiens: Foundation for Human Vision Models [[PDF](https://arxiv.org/abs/2408.12569),[Page](https://about.meta.com/realitylabs/codecavatars/sapiens)]\n\n\n[arxiv 2024.10] ReferEverything: Towards Segmenting Everything We Can Speak of in Videos [[PDF](https://arxiv.org/abs/2410.23287),[Page](https://miccooper9.github.io/projects/ReferEverything/)]\n\n[arxiv 2024.10]VideoSAM: A Large Vision Foundation Model for High-Speed Video Segmentation  [[PDF](https://arxiv.org/abs/2410.21304),[Page](https://github.com/chikap421/videosam)]\n\n\n[arxiv 2024.11]  Generative Omnimatte: Learning to Decompose Video into Layers [[PDF](https://arxiv.org/abs/2411.16683),[Page](https://gen-omnimatte.github.io/)]\n\n[arxiv 2025.01] Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos  [[PDF](https://arxiv.org/abs/2501.04001),[Page](https://lxtgh.github.io/project/sa2va/)] ![Code](https://img.shields.io/github/stars/magic-research/Sa2VA?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## NLP related\n[arxiv 2022.10]DIFFUSEQ: SEQUENCE TO SEQUENCE TEXT GENERATION WITH DIFFUSION MODELS [[PDF](https://arxiv.org/pdf/2210.08933.pdf)]\n\n[arxiv 2023.02]The Flan Collection: Designing Data and Methods for Effective Instruction Tuning [[PDF](https://arxiv.org/pdf/2301.13688.pdf)]\n\n\n## Speech \n[arxiv 2023.01]Neural Codec Language Models are Zero-Shot Text to Speech Synthesizers[[PDF](https://arxiv.org/abs/2301.02111), [Page](https://valle-demo.github.io/)]\n\n[arxiv 2024.09]EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions[[PDF](https://arxiv.org/abs/2409.18042), [Page](https://emova-ollm.github.io/)]\n\n## embodied AI related \n[arxiv 2025.11]  MiMo-Embodied: X-Embodied Foundation Model Technical Report [[PDF](https://arxiv.org/abs/2511.16518),[Page](https://github.com/XiaomiMiMo/MiMo-Embodied)] ![Code](https://img.shields.io/github/stars/XiaomiMiMo/MiMo-Embodied?style=social&label=Star)\n\n[arxiv 2026.01] Causal World Modeling for Robot Control  [[PDF](https://github.com/Robbyant/lingbot-va/blob/master/LingBot_VA_paper.pdf),[Page](https://technology.robbyant.com/lingbot-va)] 
![Code](https://img.shields.io/github/stars/Robbyant/lingbot-va?style=social&label=Star)\n\n[arxiv 2026.02]  WorldArena: A Unified Benchmark for Evaluating Perception and Functional Utility of Embodied World Models [[PDF](https://arxiv.org/abs/2602.08971)]\n\n[arxiv 2026.03]  RealWonder: Real-Time Physical Action-Conditioned Video Generation [[PDF](https://arxiv.org/abs/2603.05449),[Page](https://liuwei283.github.io/RealWonder/)] ![Code](https://img.shields.io/github/stars/liuwei283/RealWonder?style=social&label=Star)\n\n\n[arxiv 2026.03] Motion Forcing: A Decoupled Framework for Robust Video Generation in Motion Dynamics  [[PDF](https://arxiv.org/abs/2603.10408),[Page](https://tianshuo-xu.github.io/Motion-Forcing/)] ![Code](https://img.shields.io/github/stars/Tianshuo-Xu/Motion-Forcing?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n"
  },
  {
    "path": "virtual_human.md",
    "content": "## Dataset\n[arxiv 2025.07]  Seamless Interaction: Dyadic Audiovisual Motion Modeling and Large-Scale Dataset [[PDF](https://arxiv.org/abs/2506.22554),[Page](https://github.com/facebookresearch/seamless_interaction)] ![Code](https://img.shields.io/github/stars/facebookresearch/seamless_interaction?style=social&label=Star)\n\n[arxiv 2025.07] Go to Zero: Towards Zero-shot Motion Generation with Million-scale Data  [[PDF](https://arxiv.org/abs/2507.07095),[Page](https://github.com/VankouF/MotionMillion-Codes)] ![Code](https://img.shields.io/github/stars/VankouF/MotionMillion-Codes?style=social&label=Star)\n\n[arxiv 2026.03] Face-to-Face: A Video Dataset for Multi-Person Interaction Modeling  [[PDF](https://arxiv.org/abs/2603.14794)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Gaussian Face \n[arxiv 2025.01] PERSE: Personalized 3D Generative Avatars from A Single Portrait  [[PDF](https://arxiv.org/abs/2412.21206),[Page](https://hyunsoocha.github.io/perse/)] ![Code](https://img.shields.io/github/stars/snuvclab/perse?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n## Body \n\n[arxiv 2024.10] MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion  [[PDF](https://arxiv.org/abs/2410.07659)]\n\n[arxiv 2024.10]  ControlMM: Controllable Masked Motion Generation [[PDF](http://arxiv.org/abs/2312.03596),[Page](https://exitudio.github.io/ControlMM-page/)]\n\n[arxiv 2024.10] MotionBank: A Large-scale Video Motion Benchmark with Disentangled Rule-based Annotations  [[PDF](),[Page]()]\n\n[arxiv 2024.10] Multi-modal Pose Diffuser: A Multimodal Generative Conditional Pose Prior  [[PDF](https://arxiv.org/abs/2410.14540)]\n\n[arxiv 2024.10] LEAD: Latent Realignment for Human Motion Diffusion  [[PDF](https://arxiv.org/pdf/2410.14508)]\n\n[arxiv 2024.10]  MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms [[PDF](https://arxiv.org/abs/2410.18977),[Page](https://lhchen.top/MotionCLR/)]\n\n[arxiv 2024.11]  KMM: Key Frame Mask Mamba for Extended Motion Generation [[PDF](https://arxiv.org/abs/2411.06481),[Page](https://steve-zeyu-zhang.github.io/KMM/)]\n\n[arxiv 2024.11] Rethinking Diffusion for Text-Driven Human Motion Generation  [[PDF](https://arxiv.org/abs/2411.16575)]\n\n[arxiv 2024.11] SMGDiff: Soccer Motion Generation using diffusion probabilistic models  [[PDF](https://arxiv.org/abs/2411.16216)]\n\n[arxiv 2024.11] DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters  [[PDF](https://arxiv.org/abs/2411.17423),[Page](https://driveavatar.github.io/)] ![Code](https://img.shields.io/github/stars/DRiVEAvatar/DRiVEAvatar.github.io?style=social&label=Star)\n\n[arxiv 2024.11]  UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing [[PDF](https://arxiv.org/abs/2411.16781)] \n\n[arxiv 2024.12] AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward  [[PDF](https://arxiv.org/abs/2411.18654),[Page](https://atom-motion.github.io/)] ![Code](https://img.shields.io/github/stars/VincentHancoder/AToM?style=social&label=Star)\n\n[arxiv 2024.12]  AdaVLN: Towards Visual Language Navigation in Continuous Indoor Environments with Moving Humans\n [[PDF](https://arxiv.org/abs/2411.18539),[Page](https://github.com/dillonloh/AdaVLN)] 
![Code](https://img.shields.io/github/stars/dillonloh/AdaVLN?style=social&label=Star)\n\n[arxiv 2024.12]  InfiniDreamer: Arbitrarily Long Human Motion Generation via Segment Score Distillation [[PDF](https://arxiv.org/abs/2411.18303)] \n\n[arxiv 2024.12] One Shot, One Talk: Whole-body Talking Avatar from a Single Image  [[PDF](https://arxiv.org/abs/2412.01106),[Page](https://ustc3dv.github.io/OneShotOneTalk/)] \n\n[arxiv 2024.12]  SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization [[PDF](https://sopo-motion.github.io/),[Page](https://sopo-motion.github.io/)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2024.12]  Retrieving Semantics from the Deep: an RAG Solution for Gesture Synthesis [[PDF](https://arxiv.org/abs/2403.17936),[Page](https://vcai.mpi-inf.mpg.de/projects/RAG-Gesture/)] \n\n[arxiv 2024.12]  CoMA: Compositional Human Motion Generation with Multi-modal Agents [[PDF](https://arxiv.org/abs/2412.07320),[Page](https://gabrie-l.github.io/coma-page/)] ![Code](https://img.shields.io/github/stars/Siwensun/CoMA?style=social&label=Star)\n\n[arxiv 2024.12]  Move-in-2D: 2D-Conditioned Human Motion Generation [[PDF](https://arxiv.org/abs/2412.13185),[Page](https://hhsinping.github.io/Move-in-2D/)] \n\n[arxiv 2024.12] Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation  [[PDF](https://arxiv.org/abs/2412.13111),[Page](https://zju3dv.github.io/Motion-2-to-3/)] ![Code](https://img.shields.io/github/stars/zju3dv/Motion-2-to-3?style=social&label=Star)\n\n[arxiv 2024.12] ScaMo: Exploring the Scaling Law in Autoregressive Motion Generation Model  [[PDF](https://arxiv.org/abs/2412.14559),[Page](https://shunlinlu.github.io/ScaMo/)] ![Code](https://img.shields.io/github/stars/shunlinlu/ScaMo_code?style=social&label=Star)\n\n[arxiv 2025.01]  Make-A-Character 2: Animatable 3D Character Generation From a Single Image [[PDF](https://arxiv.org/pdf/2501.07870)]\n\n[arxiv 2025.01]  FlexMotion: Lightweight, Physics-Aware, and Controllable Human Motion Generation [[PDF](https://arxiv.org/abs/2501.16778)]\n\n[arxiv 2025.02]  MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm [[PDF](https://arxiv.org/abs/2502.02358),[Page](https://diouo.github.io/motionlab.github.io/)] ![Code](https://img.shields.io/github/stars/Diouo/MotionLab?style=social&label=Star)\n\n[arxiv 2025.02] CASIM: Composite Aware Semantic Injection for Text to Motion Generation  [[PDF](https://arxiv.org/abs/2502.02063),[Page](https://cjerry1243.github.io/casim_t2m/)] ![Code](https://img.shields.io/github/stars/cjerry1243/casim_t2m?style=social&label=Star)\n\n[arxiv 2025.02] Free-T2M: Frequency Enhanced Text-to-Motion Diffusion Model With Consistency Loss  [[PDF](https://arxiv.org/abs/2501.18232),[Page](https://github.com/Hxxxz0/Free-T2m)] ![Code](https://img.shields.io/github/stars/Hxxxz0/Free-T2m?style=social&label=Star)\n\n[arxiv 2025.02] Fg-T2M++: LLMs-Augmented Fine-Grained Text Driven Human Motion Generation  [[PDF](https://arxiv.org/abs/2502.05534)]\n\n[arxiv 2025.03] StickMotion: Generating 3D Human Motions by Drawing a Stickman  [[PDF](https://arxiv.org/abs/2503.04829)]\n\n[arxiv 2025.03]  HumanMM: Global Human Motion Recovery from Multi-shot Videos [[PDF](https://arxiv.org/abs/2503.07597),[Page](https://zhangyuhong01.github.io/HumanMM/)] ![Code](https://img.shields.io/github/stars/zhangyuhong01/HumanMM-code?style=social&label=Star)\n\n[arxiv 2025.03]  PersonaBooth: Personalized Text-to-Motion Generation 
[[PDF](https://arxiv.org/abs/2503.07390),[Page](https://boeun-kim.github.io/page-PersonaBooth/)] ![Code](https://img.shields.io/github/stars/Boeun-Kim/MoST?style=social&label=Star)\n\n[arxiv 2025.03] Motion Anything: Any to Motion Generation  [[PDF](https://arxiv.org/abs/2503.06955),[Page](https://steve-zeyu-zhang.github.io/MotionAnything/)] ![Code](https://img.shields.io/github/stars/steve-zeyu-zhang/MotionAnything?style=social&label=Star)\n\n[arxiv 2025.03] HERO: Human Reaction Generation from Videos  [[PDF](https://arxiv.org/pdf/2503.08270),[Page](https://jackyu6.github.io/HERO/)] \n\n[arxiv 2025.03]  NIL: No-data Imitation Learning by Leveraging Pre-trained Video Diffusion Models [[PDF](https://arxiv.org/pdf/2503.10626)]\n\n[arxiv 2025.03] ACMo: Attribute Controllable Motion Generation  [[PDF](https://arxiv.org/abs/2503.11038),[Page](https://mjwei3d.github.io/ACMo/)] ![Code](https://img.shields.io/github/stars/MingjieWe/ACMo?style=social&label=Star)\n\n[arxiv 2025.03] SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing\n  [[PDF](https://arxiv.org/abs/2503.13836),[Page](https://seokhyeonhong.github.io/projects/salad/)] ![Code](https://img.shields.io/github/stars/seokhyeonhong/salad?style=social&label=Star)\n\n[arxiv 2025.03] MAG: Multi-Modal Aligned Autoregressive Co-Speech Gesture Generation without Vector Quantization  [[PDF](https://arxiv.org/pdf/2503.14040)]\n\n[arxiv 2025.03] MotionStreamer: Streaming Motion Generation via Diffusion-based Autoregressive Model in Causal Latent Space  [[PDF](https://arxiv.org/abs/2503.15451),[Page](https://zju3dv.github.io/MotionStreamer/)] ![Code](https://img.shields.io/github/stars/Li-xingXiao/272-dim-Motion-Representation?style=social&label=Star)\n\n[arxiv 2025.04]  Shape My Moves: Text-Driven Shape-Aware Synthesis of Human Motions [[PDF](https://arxiv.org/abs/2504.03639),[Page](https://shape-move.github.io/)] \n\n[arxiv 2025.05] Deterministic-to-Stochastic Diverse Latent Feature Mapping for Human Motion Synthesis  [[PDF](https://arxiv.org/abs/2505.00998)]\n\n[arxiv 2025.05]  GENMO: A GENeralist Model for Human MOtion [[PDF](https://arxiv.org/abs/2505.01425),[Page](https://research.nvidia.com/labs/dair/genmo/)] \n\n[arxiv 2025.05]  ReAlign: Bilingual Text-to-Motion Generation via Step-Aware Reward-Guided Alignment [[PDF](https://arxiv.org/abs/2505.04974),[Page](https://github.com/wengwanjiang/ReAlign)] ![Code](https://img.shields.io/github/stars/wengwanjiang/ReAlign?style=social&label=Star)\n\n[arxiv 2025.05] MoCLIP: Motion-Aware Fine-Tuning and Distillation of CLIP for Human Motion Generation  [[PDF](https://arxiv.org/abs/2505.10810)]\n\n[arxiv 2025.06]  Motion-R1: Chain-of-Thought Reasoning and Reinforcement Learning for Human Motion Generation [[PDF](https://arxiv.org/abs/2506.10353),[Page](https://motion-r1.github.io/)] ![Code](https://img.shields.io/github/stars/GigaAI-Research/Motion-R1?style=social&label=Star)\n\n[arxiv 2025.06]  PlanMoGPT: Flow-Enhanced Progressive Planning for Text to Motion Synthesis [[PDF](http://arxiv.org/abs/2506.17912),[Page](https://planmogpt.github.io/)] ![Code](https://img.shields.io/github/stars/PlanMoGPT/PlanMoGPT.github.io?style=social&label=Star)\n\n[arxiv 2025.07]  MotionGPT3: Human Motion as a Second Modality [[PDF](https://arxiv.org/abs/2506.24086),[Page](https://motiongpt3.github.io/)] ![Code](https://img.shields.io/github/stars/OpenMotionLab/MotionGPT3?style=social&label=Star)\n\n[arxiv 2025.07] Go to Zero: Towards Zero-shot Motion Generation with Million-scale Data  
[[PDF](https://arxiv.org/abs/2507.07095),[Page](https://github.com/VankouF/MotionMillion-Codes)] ![Code](https://img.shields.io/github/stars/VankouF/MotionMillion-Codes?style=social&label=Star)\n\n[arxiv 2025.07] SnapMoGen: Human Motion Generation from Expressive Texts  [[PDF](https://www.arxiv.org/abs/2507.09122),[Page](https://snap-research.github.io/SnapMoGen/)] ![Code](https://img.shields.io/github/stars/snap-research/SnapMoGen?style=social&label=Star)\n\n[arxiv 2025.07] MOSPA: Human Motion Generation Driven by Spatial Audio  [[PDF](https://arxiv.org/pdf/2507.11949)]\n\n[arxiv 2025.07]  ReMoMask: Retrieval-Augmented Masked Motion Generation [[PDF](https://arxiv.org/abs/2508.02605),[Page](https://aigeeksgroup.github.io/ReMoMask)] ![Code](https://img.shields.io/github/stars/AIGeeksGroup/ReMoMask?style=social&label=Star)\n\n[arxiv 2025.08]  Being-M0.5: A Real-Time Controllable Vision-Language-Motion Model [[PDF](https://arxiv.org/abs/2508.07863),[Page](https://beingbeyond.github.io/Being-M0.5/)] ![Code](https://img.shields.io/github/stars/BeingBeyond/Being-M0.5?style=social&label=Star)\n\n[arxiv 2025.08] InterSyn: Interleaved Learning for Dynamic Motion Synthesis in the Wild  [[PDF](https://arxiv.org/abs/2508.10297),[Page](https://myy888.github.io/InterSyn/)] \n\n[arxiv 2025.08]  Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence [[PDF](https://arxiv.org/abs/2508.13139)]\n\n[arxiv 2025.08]  DanceEditor: Towards Iterative Editable Music-driven Dance Generation with Open-Vocabulary Descriptions [[PDF](),[Page](https://lzvsdy.github.io/DanceEditor/)] ![Code](https://img.shields.io/github/stars/LZVSDY/DanceEditor?style=social&label=Star)\n\n[arxiv 2025.10] OmniMotion: Multimodal Motion Generation with Continuous Masked Autoregression  [[PDF](https://arxiv.org/abs/2510.14954)]\n\n[arxiv 2025.10] OmniMotion-X: Versatile Multimodal Whole-Body Motion Generation  [[PDF](https://arxiv.org/abs/2510.19789),[Page](https://github.com/GuoweiXu368/OmniMotion-X)] ![Code](https://img.shields.io/github/stars/GuoweiXu368/OmniMotion-X?style=social&label=Star)\n\n[arxiv 2025.10] Group Inertial Poser: Multi-Person Pose and Global Translation from Sparse Inertial Sensors and Ultra-Wideband Ranging  [[PDF](https://arxiv.org/abs/2510.21654),[Page](https://github.com/eth-siplab/GroupInertialPoser)] \n\n[arxiv 2025.11]  ReAlign: Text-to-Motion Generation via Step-Aware Reward-Guided Alignment [[PDF](https://arxiv.org/abs/2511.19217),[Page](https://github.com/wengwanjiang/ReAlign)] ![Code](https://img.shields.io/github/stars/wengwanjiang/ReAlign?style=social&label=Star)\n\n[arxiv 2025.12]  Interact2Ar: Full-Body Human-Human Interaction Generation via Autoregressive Diffusion Models [[PDF](https://arxiv.org/pdf/2512.19692),[Page](https://pabloruizponce.com/papers/Interact2Ar)] ![Code](https://img.shields.io/github/stars/pabloruizponce/Interact2Ar?style=social&label=Star)\n\n[arxiv 2025.12] OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions  [[PDF](https://arxiv.org/pdf/2512.19159)]\n\n[arxiv 2025.12] EchoMotion: Unified Human Video and Motion Generation via Dual-Modality Diffusion Transformer  [[PDF](https://arxiv.org/abs/2512.18814),[Page](https://yuxiaoyang23.github.io/EchoMotion-webpage/)]\n\n[arxiv 2025.12]  HY-Motion 1.0: Scaling Flow Matching Models for Text-To-Motion Generation [[PDF](https://arxiv.org/pdf/2512.23464),[Page](https://github.com/Tencent-Hunyuan/HY-Motion-1.0)] 
![Code](https://img.shields.io/github/stars/Tencent-Hunyuan/HY-Motion-1.0?style=social&label=Star)\n\n[arxiv 2025.12]  Think Before You Move: Latent Motion Reasoning for Text-to-Motion Generation [[PDF](https://chenhaoqcdyq.github.io/LMR/),[Page](https://chenhaoqcdyq.github.io/LMR/)] ![Code](https://img.shields.io/github/stars/chenhaoqcdyq/lmr-codes?style=social&label=Star)\n\n[arxiv 2026.01] FrankenMotion: Part-level Human Motion Generation and Composition  [[PDF](https://arxiv.org/pdf/2601.10909),[Page](https://coral79.github.io/frankenmotion/)] ![Code](https://img.shields.io/github/stars/Coral79/FrankenMotion-Code?style=social&label=Star)\n\n[arxiv 2026.01]  Superman: Unifying Skeleton and Vision for Human Motion Perception and Generation [[PDF](https://arxiv.org/abs/2602.02401)]\n\n[arxiv 2026.01]  DiMo: Discrete Diffusion Modeling for Motion Generation and Understanding [[PDF](https://arxiv.org/abs/2602.04188)]\n\n[arxiv 2026.02]  SARAH: Spatially Aware Real-time Agentic Humans [[PDF](https://arxiv.org/abs/2602.18432),[Page](https://evonneng.github.io/sarah/)] \n\n[arxiv 2026.03]  U-Mind: A Unified Framework for Real-Time Multimodal Interaction with Audiovisual Generation [[PDF](https://arxiv.org/abs/2602.23739)]\n\n[arxiv 2026.03] Kimodo: Scaling Controllable Human Motion Generation  [[PDF](https://research.nvidia.com/labs/sil/projects/kimodo/assets/kimodo_tech_report.pdf),[Page](https://research.nvidia.com/labs/sil/projects/kimodo/)] ![Code](https://img.shields.io/github/stars/nv-tlabs/kimodo?style=social&label=Star)\n\n[arxiv 2026.03] ReactMotion: Generating Reactive Listener Motions from Speaker Utterance  [[PDF](https://arxiv.org/abs/2603.15083),[Page](https://reactmotion.github.io/)] ![Code](https://img.shields.io/github/stars/awakening-ai/ReactMotion?style=social&label=Star)\n\n[arxiv 2026.03] Riemannian Motion Generation: A Unified Framework for Human Motion Representation and Generation via Riemannian Flow Matching  [[PDF](https://arxiv.org/abs/2603.15016),[Page](https://frank-miao.github.io/RMG-Project-Page/)] \n\n[arxiv 2026.03] ActionPlan: Future-Aware Streaming Motion Synthesis via Frame-Level Action Planning  [[PDF](https://arxiv.org/abs/2603.13500),[Page](https://coral79.github.io/ActionPlan/)] ![Code](https://img.shields.io/github/stars/Coral79/ActionPlan-Code?style=social&label=Star)\n\n[arxiv 2026.03] UMO: Unified In-Context Learning Unlocks Motion Foundation Model Priors  [[PDF](https://arxiv.org/abs/2603.15975),[Page](https://oliver-cong02.github.io/UMO.github.io/)]\n\n[arxiv 2026.03] LaMoGen: Language to Motion Generation Through LLM-Guided Symbolic Inference  [[PDF](https://arxiv.org/abs/2603.11605),[Page](https://jjkislele.github.io/LaMoGen/)] ![Code](https://img.shields.io/github/stars/jjkislele/LaMoGen?style=social&label=Star)\n\n[arxiv 2026.03] UniMotion: A Unified Framework for Motion-Text-Vision Understanding and Generation  [[PDF](https://arxiv.org/abs/2603.22282),[Page](https://wangzy01.github.io/UniMotion/)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## audio-to-gesture \n[arxiv 2024.10] Emphasizing Semantic Consistency of Salient Posture for Speech-Driven Gesture Generation  [[PDF](https://arxiv.org/abs/2410.13786)]\n\n[arxiv 2025.03]  ExGes: Expressive Human Motion Retrieval and Modulation for Audio-Driven Gesture Synthesis [[PDF](https://arxiv.org/pdf/2503.06499)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] 
![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n\n\n## Hands \n[arxiv 2024.10] Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars [[PDF](https://arxiv.org/abs/2410.08840),[Page](https://github.com/XuanHuang0/GuassianHand)]\n\n[arxiv 2024.11] UniHands: Unifying Various Wild-Collected Keypoints for Personalized Hand Reconstruction  [[PDF](https://arxiv.org/abs/2411.11845),[Page]()]\n\n[arxiv 2024.12]  FoundHand: Large-Scale Domain-Specific Learning for Controllable Hand Image Generation [[PDF](https://arxiv.org/abs/2412.02690)]\n\n[arxiv 2024.12]  HandOS: 3D Hand Reconstruction in One Stage [[PDF](https://arxiv.org/abs/2412.01537),[Page](https://idea-research.github.io/HandOSweb/)] \n\n[arxiv 2024.12] GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities  [[PDF](https://arxiv.org/abs/2412.04244),[Page](https://ivl.cs.brown.edu/research/gigahands.html)] \n\n[arxiv 2024.12]  Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera [[PDF](https://arxiv.org/abs/),[Page](https://dyn-hamr.github.io/)] ![Code](https://img.shields.io/github/stars/ZhengdiYu/Dyn-HaMR?style=social&label=Star)\n\n[arxiv 2025.01]  Predicting 4D Hand Trajectory from Monocular Videos [[PDF](https://arxiv.org/abs/2501.08329),[Page](https://judyye.github.io/4dhands)]\n\n[arxiv 2025.04]  Direction-Aware Hybrid Representation Learning for 3D Hand Pose and Shape Estimation [[PDF](https://arxiv.org/abs/2504.01298)]\n\n[arxiv 2025.10] TOUCH: Text-gUided Controllable Generation of Free-Form Hand-Object Interactions  [[PDF](https://arxiv.org/abs/2510.14874),[Page](https://guangyid.github.io/hoi123touch/)] \n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## ego\n[arxiv 2025.04]  The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation [[PDF](https://arxiv.org/abs/2504.08654),[Page](https://masashi-hatano.github.io/EgoH4/)] ![Code](https://img.shields.io/github/stars/masashi-hatano/EgoH4?style=social&label=Star)\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## LLM \n[arxiv 2025.12] M.I.O: Towards Interactive Intelligence for Digital Humans  [[PDF](https://arxiv.org/abs/2512.13674),[Page](https://shandaai.github.io/project_mio_page/)] \n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Interaction \n[arxiv 2024.05]  Scaling Up Dynamic Human-Scene Interaction Modeling [[PDF](https://arxiv.org/abs/2403.08629),[Page](https://jnnan.github.io/trumans/)] ![Code](https://img.shields.io/github/stars/jnnan/trumans_utils?style=social&label=Star)\n\n[arxiv 2024.06]  Introducing HOT3D: An Egocentric Dataset for 3D Hand and Object Tracking [[PDF](https://arxiv.org/pdf/2406.09598),[Page](https://facebookresearch.github.io/hot3d/)] \n\n[arxiv 2024.10]  Sitcom-Crafter: A Plot-Driven Human Motion Generation System in 3D Scenes [[PDF](https://arxiv.org/abs/2410.10790),[Page](https://windvchen.github.io/Sitcom-Crafter/)]\n\n[arxiv 2024.09] DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors [[PDF](https://arxiv.org/abs/2409.08278),[Page](https://dreamhoi.github.io/)]\n\n[arxiv 2024.10] GraspDiffusion: Synthesizing Realistic Whole-body Hand-Object Interaction [[PDF](https://arxiv.org/abs/2410.13911),[Page]()]\n\n[arxiv 2024.11] AnchorCrafter: 
Animate CyberAnchors Saling Your Products via Human-Object Interacting Video Generation  [[PDF](https://arxiv.org/abs/2411.17383),[Page](https://cangcz.github.io/Anchor-Crafter/)] ![Code](https://img.shields.io/github/stars/cangcz/AnchorCrafter?style=social&label=Star)\n\n[arxiv 2024.12] HOT3D: Hand and Object Tracking in 3D from Egocentric Multi-View Videos  [[PDF](https://arxiv.org/abs/2411.19167),[Page](https://facebookresearch.github.io/hot3d/)] \n\n[arxiv 2024.12]  OOD-HOI: Text-Driven 3D Whole-Body Human-Object Interactions Generation Beyond Training Domains [[PDF](https://nickk0212.github.io/ood-hoi/#),[Page](https://nickk0212.github.io/ood-hoi/)] \n\n[arxiv 2024.12]  FIction: 4D Future Interaction Prediction from Video [[PDF](https://arxiv.org/abs/2408.00672),[Page](https://vision.cs.utexas.edu/projects/FIction/)] ![Code](https://img.shields.io/github/stars/thechargedneutron/FIction?style=social&label=Star)\n\n[arxiv 2024.12]  TriDi: Trilateral Diffusion of 3D Humans, Objects and Interactions [[PDF](https://arxiv.org/abs/),[Page](https://virtualhumans.mpi-inf.mpg.de/tridi/)] ![Code](https://img.shields.io/github/stars/ptrvilya/tridi?style=social&label=Star)\n\n[arxiv 2024.12] ContextHOI: Spatial Context Learning for Human-Object Interaction Detection  [[PDF](https://arxiv.org/abs/2412.09050)]\n\n[arxiv 2025.01] DiffGrasp: Whole-Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model  [[PDF](https://www.arxiv.org/abs/2412.20657),[Page](https://iscas3dv.github.io/DiffGrasp/)] ![Code](https://img.shields.io/github/stars/iscas3dv/DiffGrasp?style=social&label=Star)\n\n[arxiv 2025.01]  DAViD: Modeling Dynamic Affordance of 3D Objects using Pre-trained Video Diffusion Models [[PDF](https://arxiv.org/abs/2501.08333),[Page](https://snuvclab.github.io/david/)] ![Code](https://img.shields.io/github/stars/snuvclab/david?style=social&label=Star)\n\n[arxiv 2025.02]  InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions [[PDF](https://arxiv.org/pdf/2502.20390),[Page](https://sirui-xu.github.io/InterMimic/)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.02]  Ready-to-React: Online Reaction Policy for Two-Character Interaction Generation [[PDF](https://arxiv.org/abs/2502.20370),[Page](https://zju3dv.github.io/ready_to_react/)] ![Code](https://img.shields.io/github/stars/zju3dv/ready_to_react?style=social&label=Star)\n\n[arxiv 2025.03]  Towards Semantic 3D Hand-Object Interaction Generation via Functional Text Guidance [[PDF](https://arxiv.org/pdf/2502.20805)]\n\n[arxiv 2025.03] 3D Human Interaction Generation: A Survey [[PDF](https://arxiv.org/abs/2503.13120)]\n\n[arxiv 2025.03] A Survey on Human Interaction Motion Generation  [[PDF](https://arxiv.org/abs/2503.12763)]\n\n[arxiv 2025.03] Auto-Regressive Diffusion for Generating 3D Human-Object Interactions  [[PDF](https://arxiv.org/pdf/2503.16801),[Page](https://github.com/gengzichen/ARDHOI)] ![Code](https://img.shields.io/github/stars/gengzichen/ARDHOI?style=social&label=Star)\n\n[arxiv 2025.03]  Guiding Human-Object Interactions with Rich Geometry and Relations [[PDF](https://arxiv.org/abs/2503.20172),[Page](https://lalalfhdh.github.io/rog_page/)]\n\n[arxiv 2025.04]  SIGHT: Single-Image Conditioned Generation of Hand Trajectories for Hand-Object Interaction [[PDF](https://arxiv.org/abs/2503.22869)]\n\n[arxiv 2025.04]  MixerMDM: Learnable Composition of Human Motion Diffusion Models 
[[PDF](https://arxiv.org/abs/2504.01019),[Page](https://pabloruizponce.com/papers/MixerMDM)] ![Code](https://img.shields.io/github/stars/pabloruizponce/MixerMDM?style=social&label=Star)\n\n[arxiv 2025.04]  InteractVLM: 3D Interaction Reasoning from 2D Foundational Models [[PDF](https://arxiv.org/abs/2504.05303),[Page](https://interactvlm.is.tue.mpg.de/)] \n\n[arxiv 2025.04]  How Do I Do That? Synthesizing 3D Hand Motion and Contacts for Everyday Interactions [[PDF](https://arxiv.org/abs/2504.12284),[Page](https://ap229997.github.io/projects/latentact/)] ![Code](https://img.shields.io/github/stars/ap229997/latentact?style=social&label=Star)\n\n[arxiv 2025.04]  InterAnimate: Taming Region-aware Diffusion Model for Realistic Human Interaction Animation [[PDF](https://arxiv.org/pdf/2504.10905)]\n\n[arxiv 2025.04] HUMOTO: A 4D Dataset of Mocap Human Object Interactions  [[PDF](https://arxiv.org/abs/2504.10414),[Page](https://jiaxin-lu.github.io/humoto/)] \n\n[arxiv 2025.05]  HOIGaze: Gaze Estimation During Hand-Object Interactions in Extended Reality Exploiting Eye-Hand-Head Coordination [[PDF](https://arxiv.org/abs/2504.19828),[Page](https://zhiminghu.net/hu25_hoigaze.html)] ![Code](https://img.shields.io/github/stars/CraneHzm/HOIGaze?style=social&label=Star)\n\n[arxiv 2025.05]  MEgoHand: Multimodal Egocentric Hand-Object Interaction Motion Generation [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.05]  Large-Scale Multi-Character Interaction Synthesis [[PDF](https://arxiv.org/abs/2505.14087)]\n\n[arxiv 2025.05]  Multi-Person Interaction Generation from Two-Person Motion Priors [[PDF](https://arxiv.org/pdf/2505.17860)]\n\n[arxiv 2025.06] HOIDiNi: Human-Object Interaction through Diffusion Noise Optimization  [[PDF](https://arxiv.org/pdf/2506.15625),[Page](https://hoidini.github.io/)] ![Code](https://img.shields.io/github/stars/hoidini/HOIDiNi?style=social&label=Star)\n\n[arxiv 2025.06]  GenHOI: Generalizing Text-driven 4D Human-Object Interaction Synthesis for Unseen Objects [[PDF](https://arxiv.org/abs/2506.15483),[Page](https://etach-qs.github.io/GenHOI_project/)] ![Code](https://img.shields.io/github/stars/etach-qs/GenHOI?style=social&label=Star)\n\n[arxiv 2025.06] DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling  [[PDF](https://arxiv.org/abs/2506.18680),[Page](https://github.com/anindita127/DuetGen)] ![Code](https://img.shields.io/github/stars/anindita127/DuetGen?style=social&label=Star)\n\n[arxiv 2025.07]  HOI-Dyn: Learning Interaction Dynamics for Human-Object Motion Diffusion [[PDF](https://arxiv.org/pdf/2507.01737)]\n\n[arxiv 2025.08]  Perceiving and Acting in First-Person: A Dataset and Benchmark for Egocentric Human-Object-Human Interactions [[PDF](https://arxiv.org/pdf/2508.04681),[Page](https://liangxuy.github.io/InterVLA/)] ![Code](https://img.shields.io/github/stars/liangxuy/InterVLA?style=social&label=Star)\n\n[arxiv 2025.08]  CoopDiff: Anticipating 3D Human-object Interactions via Contact-consistent Decoupled Diffusion [[PDF](https://arxiv.org/abs/2508.07162)]\n\n[arxiv 2025.08]  GaussianArt: Unified Modeling of Geometry and Motion for Articulated Objects [[PDF](https://arxiv.org/abs/2508.14891),[Page](https://sainingzhang.github.io/project/gaussianart/)] \n\n[arxiv 2025.08]  ECHO: Ego-Centric modeling of Human-Object interactions [[PDF](https://arxiv.org/pdf/2508.21556)]\n\n[arxiv 2025.09] InterAct: A Large-Scale Dataset of Dynamic, Expressive and Interactive Activities between Two 
People in Daily Scenarios  [[PDF](https://arxiv.org/abs/2509.05747),[Page](https://hku-cg.github.io/interact/)] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n[arxiv 2025.09] ScoreHOI: Physically Plausible Reconstruction of Human-Object Interaction via Score-Guided Diffusion  [[PDF](https://arxiv.org/abs/2509.07920),[Page](https://github.com/RammusLeo/ScoreHOI)] ![Code](https://img.shields.io/github/stars/RammusLeo/ScoreHOI?style=social&label=Star)\n\n[arxiv 2025.09]  InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation [[PDF](https://arxiv.org/pdf/2509.09555),[Page](https://sirui-xu.github.io/InterAct/)] ![Code](https://img.shields.io/github/stars/wzyabcas/InterAct?style=social&label=Star)\n\n[arxiv 2025.09]  OnlineHOI: Towards Online Human-Object Interaction Generation and Perception [[PDF](https://arxiv.org/abs/2509.12250)]\n\n[arxiv 2025.10] Text2Interact: High-Fidelity and Diverse Text-to-Two-Person Interaction Generation  [[PDF](https://arxiv.org/abs/2510.06504)]\n\n[arxiv 2025.10]  Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation [[PDF](https://arxiv.org/abs/2510.14976),[Page](https://stevenlsw.github.io/ponimator/)] ![Code](https://img.shields.io/github/stars/stevenlsw/ponimator?style=social&label=Star)\n\n[arxiv 2025.11]  SyncMV4D: Synchronized Multi-view Joint Diffusion of Appearance and Motion for Hand-Object Interaction Synthesis [[PDF](https://arxiv.org/abs/2511.19319),[Page](https://droliven.github.io/SyncMV4D)] ![Code](https://img.shields.io/github/stars/Droliven/syncmv4d_code?style=social&label=Star)\n\n[arxiv 2026.01]  InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions [[PDF](https://arxiv.org/abs/2602.06035),[Page](https://sirui-xu.github.io/InterPrior/)] \n\n[arxiv 2026.03] PAM: A Pose-Appearance-Motion Engine for Sim-to-Real HOI Video Generation  [[PDF](https://arxiv.org/abs/2603.22193)] ![Code](https://img.shields.io/github/stars/GasaiYU/PAM?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Interaction Detection \n\n[arxiv 2024.12]  Orchestrating the Symphony of Prompt Distribution Learning for Human-Object Interaction Detection [[PDF](https://arxiv.org/abs/2412.08506)]\n\n[arxiv 2024.12]  HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction [[PDF](https://arxiv.org/abs/2412.13187)]\n\n[arxiv 2025.01]  Interacted Object Grounding in Spatio-Temporal Human-Object Interactions [[PDF](https://arxiv.org/abs/2412.19542),[Page](https://github.com/DirtyHarryLYL/HAKE-AVA)] ![Code](https://img.shields.io/github/stars/DirtyHarryLYL/HAKE-AVA?style=social&label=Star)\n\n[arxiv 2025.03]  UniHOPE: A Unified Approach for Hand-Only and Hand-Object Pose Estimation [[PDF](https://arxiv.org/pdf/2503.13303),[Page](https://github.com/JoyboyWang/UniHOPE_Pytorch)] ![Code](https://img.shields.io/github/stars/JoyboyWang/UniHOPE_Pytorch?style=social&label=Star)\n\n[arxiv 2025.03] Reconstructing In-the-Wild Open-Vocabulary Human-Object Interactions [[PDF](https://arxiv.org/abs/2503.15898),[Page](https://wenboran2002.github.io/3dhoi/)] ![Code](https://img.shields.io/github/stars/wenboran2002/open-3dhoi?style=social&label=Star)\n\n[arxiv 2025.08] HOID-R1: Reinforcement Learning for Open-World Human-Object Interaction Detection Reasoning with Multimodal Large Language Model  [[PDF](https://arxiv.org/abs/2508.11350)]\n\n[arxiv 2026.03]   [[PDF](),[Page]()] 
![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## environment \n[arxiv 2024.10] DepthSplat: Connecting Gaussian Splatting and Depth  [[PDF](https://arxiv.org/abs/2410.13862),[Page](https://haofeixu.github.io/depthsplat/)]\n\n[arxiv 2024.10] Long-LRM: Long-sequence Large Reconstruction Model for Wide-coverage Gaussian Splats  [[PDF](https://arxiv.org/abs/2410.12781),[Page](https://arthurhero.github.io/projects/llrm/)]\n\n[arxiv 2024.12] SCENIC: Scene-aware Semantic Navigation with Instruction-guided Control  [[PDF](https://arxiv.org/abs/2412.15664),[Page](https://virtualhumans.mpi-inf.mpg.de/scenic/)] \n\n[arxiv 2024.12]  ZeroHSI: Zero-Shot 4D Human-Scene Interaction [[PDF](https://arxiv.org/abs/2412.18600),[Page](https://awfuact.github.io/zerohsi/)] \n\n[arxiv 2025.03] SceneMI: Motion In-betweening for Modeling Human-Scene Interactions  [[PDF](https://arxiv.org/pdf/2503.16289),[Page](https://inwoohwang.me/SceneMI/)] \n\n[arxiv 2025.06]  GenHSI: Controllable Generation of Human-Scene Interaction Videos [[PDF](https://arxiv.org/pdf/2506.19840),[Page](https://kunkun0w0.github.io/project/GenHSI/)] ![Code](https://img.shields.io/github/stars/kunkun0w0/GenHSI?style=social&label=Star)\n\n[arxiv 2025.09]  FantasyHSI: Video-Generation-Centric 4D Human Synthesis In Any Scene through A Graph-based Multi-Agent Framework [[PDF](https://arxiv.org/abs/2509.01232),[Page](https://fantasy-amap.github.io/fantasy-hsi/)] ![Code](https://img.shields.io/github/stars/Fantasy-AMAP/fantasy-hsi?style=social&label=Star)\n\n[arxiv 2025.10]  Human3R: Everyone Everywhere All at Once [[PDF](https://arxiv.org/abs/2510.06219),[Page](https://fanegg.github.io/Human3R)] ![Code](https://img.shields.io/github/stars/fanegg/Human3R?style=social&label=Star)\n\n[arxiv 2026.01] Dynamic Worlds, Dynamic Humans: Generating Virtual Human-Scene Interaction Motion in Dynamic Scenes  [[PDF](https://arxiv.org/abs/2601.19484)]\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n## VLA\n[arxiv 2025.11]  HMVLM: Human Motion-Vision-Language Model via MoE LoRA [[PDF](https://arxiv.org/abs/2511.01463)]\n\n[arxiv 2026.02] VLA-JEPA: Enhancing Vision-Language-Action Model with Latent World Model  [[PDF](https://arxiv.org/abs/2602.10098),[Page](https://github.com/ginwind/VLA-JEPA/)] ![Code](https://img.shields.io/github/stars/ginwind/VLA-JEPA?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n\n\n## Capture \n\n[arxiv 2024.11] FreeCap: Hybrid Calibration-Free Motion Capture in Open Environments  [[PDF](https://arxiv.org/abs/2411.04469)]\n\n[arxiv 2025.04]  EMO-X: Efficient Multi-Person Pose and Shape Estimation in One-Stage [[PDF](https://arxiv.org/abs/2504.08718)]\n\n[arxiv 2025.04] CoMotion: Concurrent Multi-person 3D Motion  [[PDF](https://arxiv.org/abs/2504.12186),[Page](https://github.com/apple/ml-comotion)] ![Code](https://img.shields.io/github/stars/apple/ml-comotion?style=social&label=Star)\n\n\n[arxiv 2026.03]   [[PDF](),[Page]()] ![Code](https://img.shields.io/github/stars/xxx?style=social&label=Star)\n
  }
]