Self-supervised depth estimation in indoor environments
Abstract
Self-supervised depth estimation shows promising results in outdoor environments, but few works target indoor and more arbitrary scenarios. After a series of experiments, we found that one reason may be that current architectures cannot train the network on sequences with high-variation ego-motion. Existing methods usually rely on the KITTI and Cityscapes training datasets, which consist mostly of forward motion. When testing existing methods on indoor datasets (such as TUM RGB-D and NYU), training fails on most of them, with the network collapsing to an all-zero depth map.
The goal of this project is to investigate this issue and find a way to train a self-supervised indoor depth estimation network.
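To make the failure mode concrete, below is a minimal PyTorch sketch of the view-synthesis (photometric) loss that the methods listed under Literature share: back-project target pixels with the predicted depth, transform them with the predicted relative pose, and compare the warped source frame against the target. All names, the 4x4 pose convention, and the pinhole intrinsics K are illustrative assumptions, not taken from any specific codebase.

```python
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, pose, K):
    """View-synthesis loss (illustrative sketch).

    target, source: [B, 3, H, W] adjacent video frames
    depth:          [B, 1, H, W] predicted depth for the target frame
    pose:           [B, 4, 4]    predicted target->source camera motion
    K:              [B, 3, 3]    pinhole intrinsics (assumed known)
    """
    B, _, H, W = target.shape
    device = target.device

    # Pixel grid in homogeneous coordinates: [B, 3, H*W]
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)])
    pix = pix.reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D with the predicted depth, then transform
    # with the predicted relative pose.
    cam = torch.inverse(K) @ pix * depth.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    proj = K @ (pose @ cam_h)[:, :3]

    # Perspective divide. Note the degenerate minimum: if the network
    # outputs zero depth, every pixel back-projects to the camera center
    # and the warp collapses to a single sample point -- consistent with
    # the all-zero depth failure described above.
    z = proj[:, 2:3].clamp(min=1e-6)
    uv = proj[:, :2] / z

    # Normalize to [-1, 1] for grid_sample and warp the source frame.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    warped = F.grid_sample(source, grid, padding_mode="border",
                           align_corners=True)

    return (target - warped).abs().mean()
```

Because gradients reach the pose network only through this warp, large or erratic ego-motion (common indoors, rare in KITTI-style driving data) makes the loss landscape much harder, which motivates the directions below.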
Possible directions:
1. Try to improve pose estimation by using pre-trained pose networks, such as SfMLearner's pose network or FlowNet 2.0, and fine-tuning them (see the sketch after this list).
2. Find a way to train the pose network end-to-end by improving the PoseNet architecture and designing a better loss function and training scheme.
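As a starting point for both directions, here is a sketch of an SfMLearner-style PoseNet in PyTorch: a small convolutional encoder over a stacked frame pair that regresses a 6-DoF relative pose, with the encoder optionally initialized from a pre-trained checkpoint and frozen for fine-tuning. The layer sizes and the checkpoint path are assumptions for illustration, not a reference implementation.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """SfMLearner-style pose regressor (illustrative sketch): a conv
    encoder over a concatenated frame pair, regressing a 6-DoF relative
    pose (3 translation + 3 rotation parameters)."""

    def __init__(self):
        super().__init__()
        channels = [16, 32, 64, 128, 256]
        layers, in_ch = [], 6  # two RGB frames stacked along channels
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*layers)
        self.pose_head = nn.Conv2d(in_ch, 6, 1)

    def forward(self, target, source):
        x = self.encoder(torch.cat([target, source], dim=1))
        # Global average over spatial dims; the small scale factor
        # (as in SfMLearner) keeps initial predictions near identity.
        return 0.01 * self.pose_head(x).mean(dim=(2, 3))

# Direction 1: initialize from weights pre-trained on forward-motion
# data (e.g. KITTI), freeze the encoder, and fine-tune only the pose
# head on indoor sequences. The checkpoint path is hypothetical.
net = PoseNet()
# net.load_state_dict(torch.load("sfmlearner_posenet.pth"))
for p in net.encoder.parameters():
    p.requires_grad = False
```

Direction 2 would instead modify this architecture itself (e.g. a different pose parameterization or a loss that is robust to large rotations) and train it jointly with the depth network from scratch.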
Requirements
Basic C++ and Python skills.
Moderate understanding of deep learning.
Moderate understanding of pose estimation and multi-view geometry.
Literature
-Depth-
(vid2depth) Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints
(monodepth2) Digging Into Self-Supervised Monocular Depth Estimation
(struct2depth) Depth Prediction without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos
(GeoNet) Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose
-Pose-
(SfMLearner) Unsupervised Learning of Depth and Ego-Motion from Video
FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks