VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control

University of Toronto, Vector Institute, Snap Inc., SFU

arXiv 2024



Abstract

Modern text-to-video synthesis models demonstrate coherent, photorealistic generation of complex videos from a text description. However, most existing models lack fine-grained control over camera movement, which is critical for downstream applications related to content creation, visual effects, and 3D vision. Recently, new methods have demonstrated the ability to generate videos with controllable camera poses; these techniques leverage pre-trained U-Net-based diffusion models that explicitly disentangle spatial and temporal generation. Still, no existing approach enables camera control for new, transformer-based video diffusion models that process spatial and temporal information jointly. Here, we propose to tame video transformers for 3D camera control using a ControlNet-like conditioning mechanism that incorporates spatiotemporal camera embeddings based on Plücker coordinates. Our approach demonstrates state-of-the-art performance for controllable video generation after fine-tuning on the RealEstate10K dataset. To the best of our knowledge, our work is the first to enable camera control for transformer-based video diffusion models.


Method

We adapt the Snap Video FIT architecture to incorporate camera control. The model takes as input the noisy video along with the camera extrinsics and intrinsics for each video frame. Using these camera parameters, we compute Plücker coordinates for every pixel of the video frames. Both the input video and the Plücker coordinate frames are converted to patch tokens, and we condition the video patch tokens on the camera tokens using a mechanism similar to ControlNet. The model then estimates the denoised video by recurrent application of FIT blocks: each block reads information from the patch tokens into a small set of latent tokens on which computation is performed, and the results are written back to the patch tokens in an iterative denoising diffusion process.
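To make the camera embedding concrete, the sketch below computes per-pixel Plücker coordinates (o × d, d) from a camera's intrinsics and camera-to-world matrix. This is a minimal illustration, not the paper's code: the function name, argument conventions, and normalization details are our own assumptions.

```python
import torch

def plucker_embedding(K, c2w, height, width):
    """Per-pixel Plücker ray embeddings for one camera (illustrative sketch).

    K:   (3, 3) camera intrinsics
    c2w: (4, 4) camera-to-world extrinsics
    Returns a (6, H, W) tensor holding (o x d, d) for the ray through each pixel.
    """
    # Pixel grid sampled at pixel centers.
    v, u = torch.meshgrid(
        torch.arange(height, dtype=torch.float32) + 0.5,
        torch.arange(width, dtype=torch.float32) + 0.5,
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(3, -1)  # (3, H*W)

    # Back-project to camera-space directions, then rotate into world space.
    dirs_cam = K.inverse() @ pix
    dirs_world = c2w[:3, :3] @ dirs_cam
    dirs_world = dirs_world / dirs_world.norm(dim=0, keepdim=True)       # unit d

    # The camera center o is shared by all rays of this frame.
    origin = c2w[:3, 3:4].expand_as(dirs_world)                          # (3, H*W)

    moment = torch.cross(origin, dirs_world, dim=0)                      # o x d
    plucker = torch.cat([moment, dirs_world], dim=0)                     # (6, H*W)
    return plucker.reshape(6, height, width)
```

Stacking these embeddings over the frames of a clip gives a (6, T, H, W) camera "video" that can be patchified alongside the RGB frames.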

Figure: VD3D architecture overview.
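The ControlNet-like conditioning can be summarized by the sketch below: camera Plücker frames are patchified into tokens and injected into the video patch tokens through a zero-initialized projection. The module name, patch size, and exact injection point are assumptions for illustration; the paper conditions the Snap Video FIT patch tokens in a similar spirit.

```python
import torch
import torch.nn as nn

class PluckerConditioning(nn.Module):
    """ControlNet-style injection of Plücker tokens into video patch tokens (sketch)."""

    def __init__(self, patch_size=8, dim=1024):
        super().__init__()
        # Patchify 6-channel Plücker frames into tokens (hypothetical sizes).
        self.patchify = nn.Conv3d(
            6, dim,
            kernel_size=(1, patch_size, patch_size),
            stride=(1, patch_size, patch_size),
        )
        # Zero-initialized projection: the camera signal starts out as a no-op,
        # so the conditioned model initially matches the pre-trained model.
        self.proj = nn.Linear(dim, dim)
        nn.init.zeros_(self.proj.weight)
        nn.init.zeros_(self.proj.bias)

    def forward(self, video_tokens, plucker_frames):
        # plucker_frames: (B, 6, T, H, W) -> (B, N, dim) camera patch tokens
        cam_tokens = self.patchify(plucker_frames).flatten(2).transpose(1, 2)
        # Add the (initially zero) camera signal to the video patch tokens.
        return video_tokens + self.proj(cam_tokens)
```

The zero initialization mirrors ControlNet's design: fine-tuning starts from the unmodified text-to-video model and gradually learns to exploit the camera signal.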

3D Camera Control for Dynamic Scenes

Each example shows the camera input, the reference trajectory video, and the camera-controlled generation for the following prompts:

A trio of fashionable, beret-clad cats sips coffee at a chic Parisian cafe
An astronaut cooking with a pan and fire in the kitchen
A cat sits at a grand piano, its paws gracefully tapping the keys
A huge dinosaur skeleton is walking in a golden wheat field on a bright sunny day
A man with a skull face in flames walking around Piccadilly Circus
More Results

3D Camera Control for Real Image-to-Multiview Scenes

A bedroom with a bed, lamps and a window
A house sitting in the middle of a grassy field

Citation

@article{bahmani2024vd3d,
  author = {Bahmani, Sherwin and Skorokhodov, Ivan and Siarohin, Aliaksandr and Menapace, Willi and Qian, Guocheng and Vasilkovsky, Michael and Lee, Hsin-Ying and Wang, Chaoyang and Zou, Jiaxu and Tagliasacchi, Andrea and Lindell, David B. and Tulyakov, Sergey},
  title = {VD3D: Taming Large Video Diffusion Transformers for 3D Camera Control},
  journal = {arXiv preprint arXiv:2407.12781},
  year = {2024},
}