HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion
Xian Liu1,2    Jian Ren1    Aliaksandr Siarohin1    Ivan Skorokhodov1    Yanyu Li1   
Dahua Lin2    Xihui Liu3    Ziwei Liu4    Sergey Tulyakov1
1Snap Inc.    2CUHK    3HKU    4NTU
Short Demo Video (3min)
We present a short demo video consisting mainly of visualization results and a quick overview of our framework.
Long Demo Video (10min)
We present a long demo video that elaborates in detail on the motivations and framework design.
Abstract
Despite significant advances in large-scale text-to-image models, achieving hyper-realistic human image generation remains a desirable yet unsolved task. Existing models like Stable Diffusion and DALL·E 2 tend to generate human images with incoherent parts or unnatural poses. To tackle these challenges, our key insight is that human images are inherently structural over multiple granularities, from the coarse-level body skeleton to the fine-grained spatial geometry. Therefore, capturing such correlations between the explicit appearance and latent structure in one model is essential to generate coherent and natural human images. To this end, we propose a unified framework, HyperHuman, that generates in-the-wild human images of high realism and diverse layouts. Specifically, 1) we first build a large-scale human-centric dataset, named HumanVerse, which consists of 340M images with comprehensive annotations like human pose, depth, and surface-normal. 2) Next, we propose a Latent Structural Diffusion Model that simultaneously denoises the depth and surface-normal along with the synthesized RGB image. Our model enforces the joint learning of image appearance, spatial relationship, and geometry in a unified network, where each branch complements the others with both structural awareness and textural richness. 3) Finally, to further boost the visual quality, we propose a Structure-Guided Refiner that composes the predicted conditions for more detailed, higher-resolution generation. Extensive experiments demonstrate that our framework achieves state-of-the-art performance, generating hyper-realistic human images under diverse scenarios.
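As a schematic reference only (a simplified epsilon-prediction form; the exact latent spaces, per-modality noise schedules, and loss weighting follow the paper and may differ), the joint denoising of the three modalities can be written as

$$
\mathcal{L} = \mathbb{E}_{\mathbf{z}^{x}_0,\,\mathbf{z}^{d}_0,\,\mathbf{z}^{n}_0,\,\boldsymbol{\epsilon},\,t}\Big[\big\|\boldsymbol{\epsilon} - \boldsymbol{\epsilon}_\theta\big(\mathbf{z}^{x}_t,\,\mathbf{z}^{d}_t,\,\mathbf{z}^{n}_t,\,t,\,c,\,p\big)\big\|_2^2\Big],
$$

where $\mathbf{z}^{x}$, $\mathbf{z}^{d}$, and $\mathbf{z}^{n}$ denote the latents of the RGB image, depth, and surface-normal, and $c$ and $p$ denote the text caption and pose skeleton.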
Top: The proposed HyperHuman simultaneously generates the coarse RGB, depth, normal, and high-resolution images conditioned on text and skeleton. Both photo-realistic images and stylistic renderings can be created. Bottom: We compare with recent T2I models, showing better realism, quality, diversity, and controllability. Note that in each 2x2 grid (left), the upper-left is the input skeleton, while the others are the jointly denoised normal, depth, and coarse RGB at 512x512. With the full model, we synthesize images up to 1024x1024 (right).
Framework Overview
Overview of the HyperHuman Framework. In the Latent Structural Diffusion Model (purple), the image x, depth d, and surface-normal n are jointly denoised, conditioned on the caption c and pose skeleton p. In the Structure-Guided Refiner (blue), we compose the predicted conditions for higher-resolution generation. Note that the grey images denote randomly dropped-out conditions for more robust training.
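As a rough illustration of the two-stage pipeline above, the following is a minimal, non-official inference sketch; the module and function interfaces (unet, refiner, scheduler, etc.) are hypothetical placeholders and do not correspond to the released implementation.

import torch

@torch.no_grad()
def generate(caption_emb, pose, unet, refiner, scheduler):
    # Stage 1 (Latent Structural Diffusion): jointly denoise the RGB, depth, and
    # surface-normal latents with one shared network, conditioned on text and pose.
    z_x = torch.randn(1, 4, 64, 64)   # coarse RGB latent
    z_d = torch.randn(1, 4, 64, 64)   # depth latent
    z_n = torch.randn(1, 4, 64, 64)   # surface-normal latent
    for t in scheduler.timesteps:
        eps_x, eps_d, eps_n = unet(z_x, z_d, z_n, t, caption_emb, pose)
        z_x = scheduler.step(eps_x, t, z_x)   # one denoising step per modality
        z_d = scheduler.step(eps_d, t, z_d)
        z_n = scheduler.step(eps_n, t, z_n)

    # Stage 2 (Structure-Guided Refiner): compose the predicted coarse RGB, depth,
    # and normal as spatial conditions to synthesize the final 1024x1024 image.
    return refiner(z_x, z_d, z_n, caption_emb, pose)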
Quantitative Results
Zero-Shot Evaluation on MS-COCO 2014 Validation Human. We compare our model with recent SOTA general T2I models (Stable Diffusion v1.5, v2.0, v2.1; SDXL; DeepFloyd-IF) and controllable methods (ControlNet; T2I-Adapter; HumanSD). Note that since SDXL generates artistic-style images at 512x512 and IF only creates images of fixed sizes, we first generate 1024x1024 results and then resize them to 512x512 for these two methods. We bold the best and underline the second-best results for clarity. Our improvements over the second-best method are shown in red.

Evaluation Curves on MS-COCO 2014 Validation Human Subset. We show FID-CLIP (left) and FID_CLIP-CLIP (right) curves with the CFG scale ranging from 4.0 to 20.0 for all methods.
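For context, curves like these are typically produced by sweeping the classifier-free guidance (CFG) scale and recording FID and CLIP score at each setting. The sketch below assumes hypothetical helpers (generate_images, compute_fid, compute_clip_score) and is not the evaluation code used in the paper.

def cfg_sweep(prompts, real_images, scales=(4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0, 20.0)):
    # Sweep the CFG scale; each point on a curve is one (CLIP score, FID) pair.
    curve = []
    for s in scales:
        fakes = generate_images(prompts, cfg_scale=s)   # hypothetical generator call
        fid = compute_fid(real_images, fakes)           # hypothetical FID metric
        clip = compute_clip_score(prompts, fakes)       # hypothetical CLIP score metric
        curve.append((s, fid, clip))
    return curve  # plot CLIP score (x) vs. FID (y) to obtain one curve per method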

User Preference Comparisons. We report the ratio of users who prefer our model over the baselines.
More Comparisons (1024x1024)
BibTeX
@article{liu2023hyperhuman,
    title={HyperHuman: Hyper-Realistic Human Generation with Latent Structural Diffusion},
    author={Liu, Xian and Ren, Jian and Siarohin, Aliaksandr and Skorokhodov, Ivan and Li, Yanyu and Lin, Dahua and Liu, Xihui and Liu, Ziwei and Tulyakov, Sergey},
    journal={arXiv preprint arXiv:2310.08579},
    year={2023}
}