Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach

1University of California, Los Angeles, 2Snap Inc.
*Work done during internship at Snap

Abstract

We present a novel video generation framework that integrates 3-dimensional geometry and dynamic awareness. To achieve this, we augment 2D videos with 3D point trajectories and align them in pixel space. The resulting 3D-aware video dataset, PointVid, is then used to fine-tune a latent diffusion model, enabling it to track 2D objects with 3D Cartesian coordinates. Building on this, we regularize the shape and motion of objects in the video to eliminate undesired artifacts, e.g., nonphysical deformation. Consequently, we enhance the quality of generated RGB videos and alleviate common issues like object morphing, which are prevalent in current video models due to a lack of shape awareness. With our 3D augmentation and regularization, our model is capable of handling contact-rich scenarios such as task-oriented videos. These videos involve complex interactions of solids, where 3D information is essential for perceiving deformation and contact. Furthermore, our model improves the overall quality of video generation by promoting the 3D consistency of moving objects and reducing abrupt changes in shape and motion.
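To make the pixel-space alignment of 3D point trajectories concrete, below is a minimal sketch of how tracked 3D points could be rasterized into a per-pixel coordinate map for one frame. It assumes the points are given in camera coordinates with known intrinsics K; the function name and arguments are hypothetical illustrations, not the PointVid tooling.

    import numpy as np

    def rasterize_point_map(points_3d, K, frame_hw):
        """Project tracked 3D points (N, 3) in camera coordinates onto the image
        plane and scatter their (x, y, z) values into a map that is spatially
        aligned with the RGB frame."""
        H, W = frame_hw
        point_map = np.zeros((H, W, 3), dtype=np.float32)
        # Perspective projection with camera intrinsics K (3, 3).
        uvw = (K @ points_3d.T).T
        uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        # Keep only points that land inside the image and lie in front of the camera.
        valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (points_3d[:, 2] > 0)
        point_map[v[valid], u[valid]] = points_3d[valid]  # store 3D XYZ per pixel
        return point_map

Repeating this per frame yields a point-trajectory video that shares the spatial layout of the RGB video, which is what allows the two modalities to be paired during fine-tuning.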

Pipeline

We sample video–point pairs, concatenate them along the channel dimension, and use them to train a UNet. In addition to the standard cross attention between the conditioning and the latents, we add cross attention between the video and point channels for better alignment between the two modalities. Furthermore, the 3D information from the points is used to regularize RGB video generation by applying a misalignment penalty during the video diffusion process. A rough sketch of the fusion step appears below.
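The sketch below illustrates the two ingredients named above in a toy form: channel-wise concatenation of video and point latents, followed by a cross-attention step in which the fused tokens attend to the point tokens. All module names, layer widths, and tensor shapes are placeholder assumptions, not the actual architecture.

    import torch
    import torch.nn as nn

    class VideoPointFusion(nn.Module):
        """Toy fusion block: concatenate video and point latents on the channel
        axis, then let the fused tokens attend to the point tokens."""
        def __init__(self, dim=320, heads=8):
            super().__init__()
            self.proj_in = nn.Conv3d(2 * dim, dim, kernel_size=1)
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, video_lat, point_lat):
            # video_lat, point_lat: (B, C, T, H, W) latents of the two modalities.
            fused = self.proj_in(torch.cat([video_lat, point_lat], dim=1))
            B, C, T, H, W = fused.shape
            q = fused.flatten(2).transpose(1, 2)       # fused video/point tokens
            kv = point_lat.flatten(2).transpose(1, 2)  # point tokens as keys/values
            attn, _ = self.cross_attn(q, kv, kv)       # fused tokens attend to points
            return (q + attn).transpose(1, 2).reshape(B, C, T, H, W)

In this toy version, the point branch supplies keys and values so that geometric cues can steer the video features; the misalignment penalty on the diffusion loss is not shown here.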

BibTeX

@article{chen2025towards,
      title={Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach},
      author={Yunuo Chen and Junli Cao and Anil Kag and Vidit Goel and Sergei Korolev and Chenfanfu Jiang and Sergey Tulyakov and Jian Ren},
      journal={arXiv preprint arXiv:2502.03639},
      year={2025},
}