NVS-MonoDepth: Improving Monocular Depth Prediction with Novel View Synthesis

Authors
  • Z. Bauer
  • Z. Li
  • S. Orts-Escolano
  • M. Cazorla
Publication date 2021
Book title 2021 International Conference on 3D Vision
Book subtitle Proceedings: 3DV 2021: virtual conference, 1-3 December 2021
ISBN
  • 9781665426893
ISBN (electronic)
  • 9781665426886
Event 2021 International Conference on 3D Vision
Pages (from-to) 848-858
Publisher Piscataway, NJ: Conference Publishing Services, IEEE Computer Society
Organisations
  • Faculty of Science (FNWI) - Informatics Institute (IVI)
Abstract
Building upon recent progress in novel view synthesis, we propose its application to improving monocular depth estimation. In particular, we propose a novel training method split into three main steps. First, the prediction of a monocular depth network is warped to an additional viewpoint. Second, an additional image synthesis network corrects and improves the quality of the warped RGB image; its output is required to match the ground-truth view as closely as possible by minimizing the pixel-wise RGB reconstruction error. Third, we reapply the same monocular depth estimation network to the synthesized second viewpoint and ensure that its depth predictions are consistent with the associated ground-truth depth. Experimental results show that our method achieves state-of-the-art or comparable performance on the KITTI and NYU-Depth-v2 datasets with a lightweight and simple vanilla U-Net architecture.
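The first step of the pipeline, warping the source view into an additional viewpoint via the predicted depth, follows the standard back-project / transform / re-project recipe. The sketch below is a minimal NumPy illustration of that geometry, not the authors' implementation; the function name, the assumption of shared intrinsics `K` for both views, and the dense forward-warping formulation are all our own choices.

```python
import numpy as np

def warp_to_view(depth, K, R, t):
    """Map every source pixel to its location in a target view using the
    predicted depth map (the geometric core of step 1 of the pipeline).

    depth : (H, W) depth predicted for the source view
    K     : (3, 3) camera intrinsics (assumed shared by both views)
    R, t  : rotation (3, 3) and translation (3,) from source to target
    Returns an (H, W, 2) array of (u', v') target-pixel coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    # Back-project pixels to 3-D points in the source camera frame.
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    # Rigid transform into the target frame, then perspective projection.
    cam_t = R @ cam + t[:, None]
    proj = K @ cam_t
    proj = proj[:2] / proj[2:3]
    return proj.T.reshape(H, W, 2)
```

With the identity pose the map is the identity, and a pure x-translation `t = (tx, 0, 0)` shifts each pixel by `fx * tx / Z` along u, which is the expected disparity for a fronto-parallel scene at depth `Z`.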
Document type Conference contribution
Language English
Published at https://doi.org/10.1109/3DV53792.2021.00093
Published at https://www.computer.org/csdl/proceedings-article/3dv/2021/268800a848/1zWEfPv6Jd6
Other links https://www.proceedings.com/62174.html