Multi-View Mesh Reconstruction with Neural Deferred Shading

*Equal contribution
1Fraunhofer Heinrich-Hertz-Institute, 2Technical University of Berlin, 3Queen Mary University of London, 4Humboldt University of Berlin
CVPR 2022

We recover geometry from calibrated input images using a differentiable mesh renderer with neural shading.


We propose an analysis-by-synthesis method for fast multi-view 3D reconstruction of opaque objects with arbitrary materials and illumination.

State-of-the-art methods use both neural surface representations and neural rendering. While flexible, neural surface representations are a significant bottleneck in optimization runtime. Instead, we represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rasterization and neural shading. The renderer is used in a gradient descent optimization where both a triangle mesh and a neural shader are jointly optimized to reproduce the multi-view images.
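To make the deferred-shading idea concrete, here is a minimal forward-pass sketch in NumPy. A rasterizer (not shown) would produce a per-pixel g-buffer of interpolated positions, normals, and view directions; a small neural shader then maps each g-buffer pixel to an RGB color, and a photometric loss against the captured image drives the joint optimization of mesh and shader. All names (`neural_shader`, the g-buffer layout, the layer sizes) are illustrative assumptions, not the paper's actual implementation, and the random g-buffer stands in for real rasterizer output.

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_shader(gbuffer, params):
    """Tiny MLP shader: per-pixel (position, normal, view direction) -> RGB.

    gbuffer: (H, W, 9) array holding a 3D position, a surface normal,
    and a view direction per pixel (layout assumed for this sketch).
    """
    h = np.tanh(gbuffer @ params["W1"] + params["b1"])          # hidden layer
    rgb = 1.0 / (1.0 + np.exp(-(h @ params["W2"] + params["b2"])))  # sigmoid -> [0, 1]
    return rgb

# Stand-in g-buffer; in the real pipeline this comes from rasterizing the mesh.
H, W = 4, 4
gbuffer = rng.standard_normal((H, W, 9))

params = {
    "W1": rng.standard_normal((9, 16)) * 0.1,
    "b1": np.zeros(16),
    "W2": rng.standard_normal((16, 3)) * 0.1,
    "b2": np.zeros(3),
}

image = neural_shader(gbuffer, params)
print(image.shape)  # (4, 4, 3)

# Photometric loss against a (dummy) target view; gradients of this loss
# would update both the shader parameters and the mesh vertices.
target = np.zeros((H, W, 3))
loss = np.mean((image - target) ** 2)
```

In the actual method the whole chain is differentiable, so the same loss backpropagates through the shader into the vertex positions via the rasterizer.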

We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines and neural approaches while surpassing them in optimization runtime. Additionally, we investigate the shader and find that it learns an interpretable representation of appearance, enabling applications such as 3D material editing.


Related Links

A (non-exhaustive) list of excellent concurrent work.

Extracting Triangular 3D Models, Materials, and Lighting From Images uses differentiable meshing to overcome the fixed topology limitation when optimizing triangle meshes.

IRON: Inverse Rendering by Optimizing Neural SDFs and Materials from Photometric Images uses neural materials and a simplified rendering equation for shading.


Citation

@InProceedings{Worchel_2022_CVPR,
    author    = {Worchel, Markus and Diaz, Rodrigo and Hu, Weiwen and Schreer, Oliver and Feldmann, Ingo and Eisert, Peter},
    title     = {Multi-View Mesh Reconstruction with Neural Deferred Shading},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022}
}


This work is part of the INVICTUS project that has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 952147.