Existing text-to-image (T2I) diffusion models face several limitations, including large model sizes, slow runtime, and low-quality generation on mobile devices. This paper aims to address all of these challenges by developing an extremely small and fast T2I model that generates high-resolution, high-quality images on mobile platforms. We propose several techniques to achieve this goal. First, we systematically examine the design choices of the network architecture to reduce model parameters and latency while ensuring high-quality generation. Second, to further improve generation quality, we employ cross-architecture knowledge distillation from a much larger model, using a multi-level approach to guide the training of our model from scratch. Third, we enable few-step generation by integrating adversarial guidance with knowledge distillation. For the first time, our model, SnapGen, demonstrates the generation of 1024² px images on a mobile device in around 1.4 seconds. On ImageNet-1K, our model, with only 372M parameters, achieves an FID of 2.06 for 256² px generation. On T2I benchmarks (i.e., GenEval and DPG-Bench), our model, with merely 379M parameters, surpasses large-scale models with billions of parameters at a significantly smaller size (e.g., 7× smaller than SDXL, 14× smaller than IF-XL).
We conduct an in-depth examination of network architectures, including the denoising UNet and Autoencoder (AE), to obtain an optimal trade-off between latency and performance. Unlike prior works that optimize and compress pre-trained diffusion models, we directly focus on macro- and micro-level design choices to arrive at a novel architecture that greatly reduces model size and computational complexity while preserving high-quality generation.
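As an illustration of the kind of micro-level design choice such a search weighs, the sketch below compares a dense 3x3 convolution against a depthwise-separable residual block in PyTorch. The block structure, widths, and normalization here are assumptions chosen for illustration, not the exact SnapGen architecture.

import torch
import torch.nn as nn

class SeparableResBlock(nn.Module):
    """Residual block built from a depthwise + pointwise convolution pair,
    which needs far fewer parameters and FLOPs than a dense 3x3 convolution
    of the same width. Illustrative only; not the SnapGen block."""
    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.GroupNorm(8, channels)
        self.act = nn.SiLU()
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.pointwise(self.depthwise(self.act(self.norm(x))))
        return x + h

# Parameter counts at width 256: dense 3x3 conv vs. the separable block.
dense = nn.Conv2d(256, 256, 3, padding=1)
block = SeparableResBlock(256)
print(sum(p.numel() for p in dense.parameters()))  # ~590K
print(sum(p.numel() for p in block.parameters()))  # ~69K

At equal width, the separable block carries roughly 8-9× fewer parameters than the dense convolution; evaluating many such swaps against generation quality is the essence of the latency-performance trade-off described above.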
We introduce several improvements for training a compact T2I model from scratch. We propose multi-level knowledge distillation with timestep-aware scaling, which combines multiple training objectives. We then perform step distillation by combining adversarial training with knowledge distillation from a few-step teacher model; a sketch of the combined objective follows below.
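The sketch below shows one way such a combined objective could be assembled, assuming the student matches both the teacher's output and its intermediate features (through learned projections, since the two architectures differ) and that the distillation terms carry a timestep-dependent weight. The min-SNR-style weight and the helper names (projs, per_sample_mse) are illustrative assumptions rather than the paper's exact formulation, and the adversarial term used for step distillation is omitted.

import torch
import torch.nn.functional as F

def per_sample_mse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # MSE reduced over every dimension except the batch dimension.
    return F.mse_loss(a, b, reduction="none").flatten(1).mean(dim=1)

def multi_level_kd_loss(student_out, teacher_out,
                        student_feats, teacher_feats, projs,
                        snr, diffusion_loss):
    # Output-level distillation: match the teacher's denoising prediction.
    loss_out = per_sample_mse(student_out, teacher_out)
    # Feature-level distillation: project each student feature map to the
    # teacher's width before matching, since the architectures differ
    # (cross-architecture KD).
    loss_feat = sum(per_sample_mse(p(fs), ft)
                    for p, fs, ft in zip(projs, student_feats, teacher_feats))
    # Timestep-aware scaling: damp the distillation terms at very noisy
    # timesteps (an assumed min-SNR-style weight, not the paper's exact one).
    w = torch.clamp(snr, max=5.0) / 5.0
    return (diffusion_loss + w * (loss_out + loss_feat)).mean()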
Human evaluation vs. SDXL, SD3-Medium and SD3.5-Large:
Comparison with existing T2I models across various benchmarks:
Few-Step Visualization:
More Visualization Comparisons:
@article{hu2024snapgen,
  title={SnapGen: Taming High-Resolution Text-to-Image Models for Mobile Devices with Efficient Architectures and Training},
  author={Dongting Hu and Jierun Chen and Xijie Huang and Huseyin Coskun and Arpit Sahni and Aarush Gupta and Anujraaj Goyal and Dishani Lahiri and Rajesh Singh and Yerlan Idelbayev and Junli Cao and Yanyu Li and Kwang-Ting Cheng and S.-H. Chan and Mingming Gong and Sergey Tulyakov and Anil Kag and Yanwu Xu and Jian Ren},
  journal={arXiv preprint arXiv:24},
  year={2024}
}