HoloDiffusion: Training a 3D Diffusion Model using 2D Images

Animesh Karnewar, Andrea Vedaldi, David Novotny, Niloy Mitra

University College London, Meta AI
(CVPR 2023)

HoloDiffusion teaser figure

We present HoloDiffusion, the first 3D-aware generative diffusion model that produces 3D-consistent images and is trained with only posed image supervision. Here we show generations from models trained on different classes of the challenging CO3D dataset, each rendered from a sampling of views. Some of our randomly generated samples are presented below.

Abstract

Diffusion models have emerged as the best approach for generative modeling of 2D images. Part of their success is due to the possibility of training them on millions if not billions of images with a stable learning objective. However, extending these models to 3D remains difficult for two reasons. First, finding a large quantity of 3D training data is much harder than for 2D images. Second, while it is conceptually trivial to extend the models to operate on 3D rather than 2D grids, the associated cubic growth in memory and compute complexity makes this infeasible. We address the first challenge by introducing a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision, and the second challenge by proposing an image formation model that decouples model memory from spatial memory. We evaluate our method on real-world data, using the CO3D dataset which has not been used to train 3D generative models before. We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
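
Schematically, and with illustrative notation rather than the paper's exact formulation, training with only posed 2D supervision amounts to denoising a 3D representation and scoring the result through a differentiable renderer:

\[
\mathcal{L}(\theta) \;=\; \mathbb{E}_{i,\, j,\, t,\, \epsilon} \left\| \mathcal{R}\!\left( \mathcal{D}_\theta(v^i_t,\, t),\, P^i_j \right) - I^i_j \right\|_2^2 ,
\]

where \( v^i_t \) is a noised 3D feature volume built from the posed images of scene \( i \), \( \mathcal{R} \) is a differentiable renderer, and \( (I^i_j, P^i_j) \) is a held-out posed view. See the paper for the actual objective, including the bootstrapping of the denoising targets.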


Method

HoloDiffusion conceptual illustration

Method Overview. Our HoloDiffusion takes as input frames of category-specific videos \( \lbrace s^i \rbrace \) and produces a diffusion-based generative model \( \mathcal{D}_\theta \). The model is trained with only posed image supervision \(\lbrace (I_j^i , P_j^i )\rbrace\), without access to 3D ground truth. Once trained, the model can generate view-consistent results from novel camera locations. Please refer to Sec. 3 of the paper for details.
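
As a rough illustration of what posed-image-only supervision means in practice, the sketch below shows one possible training iteration: posed source views are lifted into a 3D feature volume, the volume is noised and denoised, and the loss is a photometric error on renders from held-out posed views. This is a minimal sketch, not the authors' code; build_feature_volume, denoiser, render, and the noise schedule are hypothetical stand-ins for the components described in Sec. 3 of the paper.

    # Minimal sketch (not the authors' code) of one training iteration with
    # only posed-image supervision; all callables are hypothetical placeholders.
    import torch
    import torch.nn.functional as F


    def cosine_alpha_bar(t, T=1000, s=0.008):
        # Cosine noise schedule (Nichol & Dhariwal, 2021): cumulative alpha_bar(t).
        f = torch.cos(((t.float() / T) + s) / (1 + s) * torch.pi / 2) ** 2
        f0 = torch.cos(torch.tensor(s / (1 + s) * torch.pi / 2)) ** 2
        return f / f0


    def training_step(build_feature_volume, denoiser, render, batch, optimizer):
        # batch: posed frames of one scene, split into source and held-out target views.
        src_images, src_poses = batch["src_images"], batch["src_poses"]
        tgt_images, tgt_poses = batch["tgt_images"], batch["tgt_poses"]

        # Lift posed source views into a 3D feature volume -- no 3D ground truth needed.
        v0 = build_feature_volume(src_images, src_poses)        # (B, C, D, H, W)

        # DDPM-style forward process applied directly to the 3D volume.
        t = torch.randint(0, 1000, (v0.shape[0],), device=v0.device)
        a_bar = cosine_alpha_bar(t).view(-1, 1, 1, 1, 1)
        v_t = a_bar.sqrt() * v0 + (1.0 - a_bar).sqrt() * torch.randn_like(v0)

        # Denoise in 3D, render from held-out cameras, supervise with a photometric loss.
        v0_hat = denoiser(v_t, t)
        renders = render(v0_hat, tgt_poses)                     # (B, V, 3, H, W)
        loss = F.mse_loss(renders, tgt_images)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()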


Quantitative Scores

HoloDiffusion results

Quantitative evaluation. FID and KID on four classes of CO3Dv2, comparing our HoloDiffusion with the baselines pi-GAN, EG3D, and GET3D, as well as with the non-bootstrapped version of our HoloDiffusion. The column “VP” denotes whether a method's renders are 3D view-consistent.
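
For reference, FID and KID both compare feature statistics of rendered samples against real images. The sketch below shows the standard computations on precomputed features; extracting the features (e.g. with an Inception network) is assumed and omitted, and this is not the evaluation code used in the paper.

    # Standard FID / KID computations on precomputed image features.
    # real, fake: arrays of shape (N, d), e.g. Inception pool features of renders.
    import numpy as np
    from scipy import linalg


    def fid(real, fake):
        # Frechet distance between Gaussians fitted to the two feature sets.
        mu_r, mu_f = real.mean(0), fake.mean(0)
        cov_r = np.cov(real, rowvar=False)
        cov_f = np.cov(fake, rowvar=False)
        covmean = linalg.sqrtm(cov_r @ cov_f).real
        return float(((mu_r - mu_f) ** 2).sum() + np.trace(cov_r + cov_f - 2 * covmean))


    def kid(real, fake):
        # Unbiased MMD^2 estimate with the polynomial kernel k(x, y) = (x.y/d + 1)^3
        # (in practice averaged over random subsets).
        d = real.shape[1]
        k = lambda a, b: (a @ b.T / d + 1) ** 3
        k_rr, k_ff, k_rf = k(real, real), k(fake, fake), k(real, fake)
        m, n = len(real), len(fake)
        term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
        term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
        return float(term_rr + term_ff - 2 * k_rf.mean())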


More Qualitative Results

Samples with colour and geometry. Samples drawn from the HoloDiffusion model trained on CO3Dv2 are shown with colour renders (left) and shaded-depth renders (right). The geometries are mostly clean, but include the base plane due to the real-world capture setting of CO3Dv2.


Bibtex


    @inproceedings{karnewar2023holodiffusion,
      title={HoloDiffusion: Training a {3D} Diffusion Model using {2D} Images},
      author={Karnewar, Animesh and Vedaldi, Andrea and Novotny, David and Mitra, Niloy},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
      year={2023}
    }
    

Acknowledgements

PRIME-EU logo

Animesh and Niloy were partially funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 956585. This research has also been supported by Meta AI and the UCL AI Centre. Finally, Animesh thanks Alexia Jolicoeur-Martineau for the helpful and insightful guidance on diffusion models.