Keywords: Computer Vision
Abstract
Depth estimation from multiple views (stereo vision) or from other single-view cues (motion, shading, defocus) has been well studied in the literature. However, estimating the depth map of a scene from a single RGB image remains an open problem due to the inherent ambiguity of mapping colors/intensities to depth values, i.e. a single 2D image could correspond to multiple 3D world scenes. Within the scope of this project, we address this problem using a deep learning approach. We have investigated different Convolutional Neural Network (CNN) architectures and loss functions for this task. Particular focus is placed on in-network upsampling layers with learnable weights, which tackle the problem of high-dimensional outputs without an excessive number of parameters. The best performing model incorporates residual learning and delivers state-of-the-art, real-time performance on depth prediction from images or videos of indoor and outdoor scenes. The methods developed within this project are applicable to several other dense prediction problems as well.
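As a concrete illustration of the ideas above, the sketch below (PyTorch) shows a learnable 2x upsampling block with a residual branch, together with a reverse Huber (berHu) loss that is a common choice for depth regression. The layer sizes, block structure, and loss here are illustrative assumptions, not the exact configuration of the released models.

```python
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    """Learnable 2x upsampling block with a residual branch (illustrative)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Learnable upsampling: a transposed convolution doubles the
        # spatial resolution instead of fixed bilinear interpolation.
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.branch = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.up(x)
        return self.relu(x + self.branch(x))  # residual learning around the convs

def berhu_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Reverse Huber (berHu) loss: L1 for small errors, scaled L2 for large ones."""
    diff = (pred - target).abs()
    c = 0.2 * diff.max().detach().clamp(min=1e-6)  # threshold from the batch maximum
    return torch.where(diff <= c, diff, (diff ** 2 + c ** 2) / (2 * c)).mean()

if __name__ == "__main__":
    feat = torch.randn(1, 1024, 8, 10)    # coarse encoder features
    out = UpProjection(1024, 512)(feat)
    print(out.shape)                      # torch.Size([1, 512, 16, 20])
```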
Pictures
Figure 1: Results of depth estimation on benchmark datasets.
Figure 2: 3D reconstruction (point cloud) from a single image.
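For reference, a point cloud like the one in Figure 2 can be obtained by back-projecting each pixel through the pinhole camera model using the predicted depth. A minimal NumPy sketch; the intrinsics fx, fy, cx, cy are placeholders that depend on the camera:

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (meters) into an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx                       # X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy                       # Y = (v - cy) * Z / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```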
Team
Contact Person(s)
Iro Laina, Christian Rupprecht
Location
Visit our lab at Garching.
Please contact Iro Laina or Christian Rupprecht for available student projects within this research project.
Software
The trained models (indoors on NYU Depth v2; outdoors on Make3D) are publicly available and can be downloaded here:
Also check out our GitHub page, which includes the code for evaluation and inference.
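For illustration, a typical inference call with one of the trained models might look like the sketch below. The model class, input resolution, and preprocessing are hypothetical placeholders, not the actual API of the release.

```python
import numpy as np
import torch
from PIL import Image

def predict_depth(model: torch.nn.Module, image_path: str) -> np.ndarray:
    """Run a depth-prediction CNN on a single RGB image (illustrative only)."""
    # The input resolution is a placeholder; use whatever the trained model expects.
    img = Image.open(image_path).convert("RGB").resize((304, 228))
    x = torch.from_numpy(np.asarray(img, dtype=np.float32) / 255.0)
    x = x.permute(2, 0, 1).unsqueeze(0)   # HWC -> NCHW batch of one
    model.eval()
    with torch.no_grad():
        depth = model(x)                  # expected shape: (1, 1, H, W), in meters
    return depth.squeeze().cpu().numpy()
```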
The CNN-based depth estimation can be utilized by a monocular SLAM framework for successful scene reconstruction. For more information, please visit this project.
Related Publications