Scalable Ranked Preference Optimization for Text-to-Image Generation

1University of Tübingen 2Tübingen AI Center, Helmholtz Munich, Munich Centre for Machine Learning, TU Munich 3Snap Inc.

arXiv 2024


Results Image

Abstract

Direct Preference Optimization (DPO) has emerged as a powerful approach to align text-to-image (T2I) models with human feedback. Unfortunately, successfully applying DPO to T2I models requires substantial resources to collect and label large-scale datasets, e.g., millions of generated paired images annotated with human preferences. In addition, these human preference datasets can become outdated quickly as rapid improvements in T2I models lead to higher-quality images. In this work, we investigate a scalable approach for collecting large-scale, fully synthetic datasets for DPO training. Specifically, the preferences for paired images are generated using a pre-trained reward function, eliminating the need to involve humans in the annotation process and greatly improving dataset collection efficiency. Moreover, we demonstrate that such datasets allow averaging predictions across multiple models and collecting ranked preferences as opposed to pairwise preferences. Furthermore, we introduce RankDPO to enhance DPO-based methods using this ranking feedback. Applying RankDPO to SDXL and SD3-Medium with our synthetically generated preference dataset "Syn-Pic" improves both prompt following (on benchmarks like T2I-CompBench, GenEval, and DPG-Bench) and visual quality (through user studies). This pipeline presents a practical and scalable solution for building better preference datasets to enhance the performance of text-to-image models.


Method

We bootstrap a preference optimization dataset from scratch using off-the-shelf text-to-image generators and reward models. We generate images for the prompts from Pick-a-Pic v2 using SDXL, SD3-Medium, PixArt-Sigma, and Stable Cascade. From these images, we construct rankings by aggregating scores from several reward models (e.g., ImageReward, PickScore, HPSv2.1), as sketched below. We then introduce RankDPO, a strategy that applies weights derived from Discounted Cumulative Gain (DCG) to the Diffusion-DPO objective, leading to significantly better prompt following and visual quality in the resulting models.
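The per-prompt ranking can be built by averaging normalized scores from the reward models. Below is a minimal sketch of this aggregation; the z-normalization, the example scores, and the function name `rank_images_for_prompt` are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def rank_images_for_prompt(scores_per_model):
    """Rank candidate images for one prompt by averaging reward-model scores."""
    normalized = []
    for scores in scores_per_model.values():
        s = np.asarray(scores, dtype=np.float64)
        # Z-normalize each model's scores so no single reward model dominates
        # (an assumption; the paper may aggregate differently).
        normalized.append((s - s.mean()) / (s.std() + 1e-8))
    avg_score = np.mean(normalized, axis=0)
    # Sorting by descending average score gives indices from most to least preferred.
    return np.argsort(-avg_score)

# Hypothetical raw scores for four images generated from the same prompt.
ranking = rank_images_for_prompt({
    "ImageReward": [0.3, 1.2, -0.5, 0.8],
    "PickScore":   [21.0, 22.5, 20.1, 22.0],
    "HPSv2.1":     [0.27, 0.31, 0.24, 0.30],
})
print(ranking)  # -> [1 3 0 2] for these example numbers
```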

Architecture overview
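Below is a minimal sketch of what a DCG-weighted, DPO-style objective over a ranked list of images could look like. The discount function, the pair weighting, the beta value, and the way per-image denoising errors are supplied are assumptions for illustration; they abstract away the diffusion-specific details of the actual RankDPO objective.

```python
import math
import torch
import torch.nn.functional as F

def dcg_weight(rank_i, rank_j):
    """Weight for a pair of images at 1-indexed ranks rank_i < rank_j.

    Uses the DCG discount 1 / log2(rank + 1); pairs far apart in the ranking
    receive larger weights (an illustrative choice).
    """
    discount = lambda r: 1.0 / math.log2(r + 1)
    return abs(discount(rank_i) - discount(rank_j))

def rankdpo_style_loss(err_theta, err_ref, ranking, beta=5000.0):
    """DCG-weighted pairwise DPO-style loss over a ranked list of images.

    err_theta, err_ref: per-image denoising errors (e.g., noise-prediction MSE)
    under the fine-tuned and frozen reference models, shape [num_images].
    ranking: image indices ordered from most to least preferred.
    beta: temperature, set to a scale commonly used for Diffusion-DPO (assumed).
    """
    # Proxy "reward": how much the fine-tuned model improves over the reference.
    improvement = err_ref - err_theta
    losses = []
    for a in range(len(ranking)):
        for b in range(a + 1, len(ranking)):
            winner, loser = ranking[a], ranking[b]   # item at rank a beats item at rank b
            margin = improvement[winner] - improvement[loser]
            losses.append(dcg_weight(a + 1, b + 1) * -F.logsigmoid(beta * margin))
    return torch.stack(losses).mean()
```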

Qualitative comparisons to preference optimization methods

Results Image
Results Image

Qualitative comparisons to SDXL and SD3-Medium

Results Image
Results Image

Quantitative results on GenEval and T2I-CompBench

Results Image

Quantitative results on DPG-Bench

Results Image

Citation

If you find our work useful, please consider citing:

@article{karthik2024scalable,
  author = {Karthik, Shyamgopal and Coskun, Huseyin and Akata, Zeynep and Tulyakov, Sergey and Ren, Jian and Kag, Anil},
  title = {Scalable Ranked Preference Optimization for Text-to-Image Generation},
  journal = {arXiv preprint},
  year = {2024},
}