MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation
Paper: arXiv:2511.09611
We introduce Parallel Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation (MMaDA-Parallel), a parallel multimodal diffusion framework that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory.
This variant is based on Amused-VQ and trained from Lumina-DiMOO, offering better quality and robustness.
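The parallel, bidirectional coupling described above can be illustrated with a toy masked-diffusion sampler: text and image tokens live in one joint sequence, and each step unmasks the most confident positions from either modality, so the two streams condition each other across the whole trajectory. Everything below (the mask id, sequence lengths, the random stand-in denoiser, and the confidence-based unmasking schedule) is an illustrative assumption in plain PyTorch, not the released implementation.

import torch

# Illustrative constants; the real model's vocabulary and lengths differ.
TEXT_LEN, IMG_LEN = 64, 256
VOCAB = 8192
MASK_ID = VOCAB        # mask id outside the vocab, so predictions never collide with it
STEPS = 16

def toy_denoiser(tokens: torch.Tensor) -> torch.Tensor:
    # Stand-in for the multimodal diffusion transformer: returns logits over
    # the vocabulary for every position, text and image alike.
    return torch.randn(tokens.shape[0], tokens.shape[1], VOCAB)

@torch.no_grad()
def parallel_sample() -> torch.Tensor:
    # Start fully masked in both modalities, held as one joint sequence.
    tokens = torch.full((1, TEXT_LEN + IMG_LEN), MASK_ID, dtype=torch.long)
    for step in range(STEPS):
        still_masked = tokens == MASK_ID
        if not still_masked.any():
            break
        logits = toy_denoiser(tokens)              # one joint forward pass
        conf, pred = logits.softmax(-1).max(-1)    # per-position confidence
        # Reveal a growing fraction of the most confident masked positions,
        # drawn from text and image regions alike, so both streams stay in
        # play throughout the denoising trajectory.
        n_reveal = max(1, int(still_masked.sum().item() * (step + 1) / STEPS))
        conf = conf.masked_fill(~still_masked, -1.0)
        reveal = conf.topk(n_reveal, dim=-1).indices
        tokens.scatter_(1, reveal, pred.gather(1, reveal))
    return tokens

out = parallel_sample()
print(out[:, :8], out[:, TEXT_LEN:TEXT_LEN + 8])   # text tokens, image tokens

The key point is the single joint sequence: at every step the denoiser sees the partially revealed text and image together, which is what distinguishes this parallel decoding from generate-text-first, then-image pipelines.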
@article{tian2025mmadaparallel,
  title={MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation},
  author={Tian, Ye and Yang, Ling and Yang, Jiongfan and Wang, Anran and Tian, Yu and Zheng, Jiani and Wang, Haochen and Teng, Zhiyang and Wang, Zhuochen and Wang, Yinjie and Tong, Yunhai and Wang, Mengdi and Li, Xiangtai},
  journal={arXiv preprint arXiv:2511.09611},
  year={2025}
}