2007-present: Engineer at Canon Inc., Tokyo, Japan.
2005-2007: Master's student in Systems and Information Engineering at University of Tsukuba, Tsukuba, Japan. Thesis: A Nested Marker for Augmented Reality.
2001-2005: Bachelor's student in Engineering Systems at University of Tsukuba, Tsukuba, Japan. Thesis: Enhanced Eyes for Better Gaze-Awareness in Collaborative Mixed Reality.
Real-Time and Scalable Incremental Segmentation on Dense SLAM
This work proposes a real-time segmentation method for 3D point clouds obtained via Simultaneous Localization And Mapping (SLAM). The proposed method incrementally merges segments obtained from each input depth image into a unified global model using a SLAM framework. Unlike previous approaches, our method can segment scenes reconstructed from multiple views in real time, with a complexity that does not depend on the size of the global model.
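The core merging step can be sketched compactly. Assuming, as a hypothetical simplification, that the global model can be rendered into the current view as a label image using the SLAM-estimated camera pose, each per-frame segment either adopts the global label it overlaps most or spawns a new one, so the per-frame cost is bounded by the image resolution rather than by the size of the model. The sketch below is an illustrative simplification, not the paper's implementation; merge_frame_segments and projected_global_labels are placeholder names, and the rendering of the label image is assumed to happen elsewhere:

    import numpy as np

    def merge_frame_segments(frame_labels, projected_global_labels,
                             next_global_id, overlap_threshold=0.3):
        """Map per-frame segment ids to global segment ids.

        frame_labels: (H, W) int array of per-frame segment ids (-1 = invalid).
        projected_global_labels: (H, W) int array of global segment ids rendered
            into the current view with the SLAM camera pose (-1 = no surface).
        Returns the id mapping and the updated id counter.
        """
        mapping = {}
        for fid in np.unique(frame_labels):
            if fid < 0:
                continue
            mask = frame_labels == fid
            overlap = projected_global_labels[mask]
            overlap = overlap[overlap >= 0]
            if overlap.size > 0:
                gids, counts = np.unique(overlap, return_counts=True)
                best = counts.argmax()
                # Adopt the dominant global label if the overlap is large enough.
                if counts[best] / mask.sum() >= overlap_threshold:
                    mapping[int(fid)] = int(gids[best])
                    continue
            # Otherwise this is a newly observed segment: assign a fresh id.
            mapping[int(fid)] = next_global_id
            next_global_id += 1
        return mapping, next_global_id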
When 2.5D is not enough: Simultaneous Reconstruction, Segmentation and Recognition on dense SLAM
While the main trend in 3D object recognition has been to infer object detection from single views of the scene, i.e., 2.5D data, this work explores performing object recognition on 3D data reconstructed from multiple viewpoints, under the conjecture that such data can improve the robustness of an object recognition system. To achieve this goal, we propose a framework able to (i) carry out incremental real-time segmentation of a 3D scene while it is being reconstructed via Simultaneous Localization And Mapping (SLAM), and (ii) simultaneously and incrementally carry out 3D object recognition and pose estimation on the reconstructed and segmented 3D representations.
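One concrete building block of such a pipeline is the pose-estimation step: once a multi-view 3D segment has been matched to an object model and putative 3D-3D correspondences are available, a closed-form least-squares rigid alignment recovers the object pose. The sketch below shows the generic Kabsch/Umeyama solution (in practice usually wrapped in an outlier-rejection loop such as RANSAC); it is a standard technique given here for illustration, not the paper's specific recognition pipeline:

    import numpy as np

    def estimate_rigid_pose(model_pts, scene_pts):
        """Least-squares rigid alignment (Kabsch/Umeyama, no scale):
        find R, t such that R @ model_pts[i] + t ~= scene_pts[i].
        Both inputs are (N, 3) arrays of corresponding 3D points."""
        mu_m = model_pts.mean(axis=0)
        mu_s = scene_pts.mean(axis=0)
        # Cross-covariance of the centered point sets.
        H = (model_pts - mu_m).T @ (scene_pts - mu_s)
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the optimal orthogonal matrix.
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_s - R @ mu_m
        return R, t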
Large Scale and Long Standing Simultaneous Reconstruction and Segmentation
This work proposes a method to segment a 3D point cloud of a scene while simultaneously reconstructing it via Simultaneous Localization And Mapping (SLAM). The proposed method incrementally merges segments obtained from each input depth image into a unified global model, leveraging the camera pose estimated via SLAM. Unlike other approaches, our method can segment scenes reconstructed from multiple views in real time, with a complexity that does not depend on the size of the global model. Moreover, we endow our system with two additional contributions: a loop-closure approach and a failure-recovery and re-localization approach, both specifically designed to enforce global consistency between merged segments, thus making our system suitable for large-scale and long-standing reconstruction and segmentation.
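To see why loop closure matters for segmentation, note that when a loop closure (or a re-localization after tracking failure) aligns two previously disconnected parts of the trajectory, segments that were reconstructed separately may turn out to describe the same surface and must be merged transitively. A union-find structure over segment ids is one natural way to keep such merges cheap; the sketch below, with the hypothetical name SegmentMerger, is a generic illustration under that assumption, not the paper's exact data structure:

    class SegmentMerger:
        def __init__(self):
            self.parent = {}

        def find(self, sid):
            # Path-halving find: returns the canonical id for a segment.
            self.parent.setdefault(sid, sid)
            while self.parent[sid] != sid:
                self.parent[sid] = self.parent[self.parent[sid]]
                sid = self.parent[sid]
            return sid

        def merge(self, a, b):
            # Called when an alignment reveals that two segments overlap.
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[rb] = ra
            return ra

    # Usage: after a loop closure, overlapping segments 7 and 42 are unified.
    merger = SegmentMerger()
    merger.merge(7, 42)
    assert merger.find(42) == merger.find(7)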
CNN-SLAM: Real-time dense monocular SLAM with learned depth prediction
Given the recent advances in depth prediction from Convolutional Neural Networks (CNNs), this paper investigates how depth maps predicted by a deep neural network can be deployed for accurate and dense monocular reconstruction. We propose a method in which CNN-predicted dense depth maps are naturally fused with depth measurements obtained from direct monocular SLAM. Our fusion scheme privileges depth prediction in image locations where monocular SLAM approaches tend to fail, e.g., along low-texture regions, and vice versa. We demonstrate the use of depth prediction for estimating the absolute scale of the reconstruction, hence overcoming one of the major limitations of monocular SLAM. Finally, we propose a framework to efficiently fuse semantic labels, obtained from a single frame, with dense SLAM, yielding a semantically coherent scene reconstruction from a single view. Evaluation results on two benchmark datasets show the robustness and accuracy of our approach.
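The fusion idea can be sketched as a per-pixel weighted average under a simple Gaussian-uncertainty assumption: where direct SLAM is confident (well-textured, high-gradient pixels) its measurement dominates, and where it is uncertain or undefined (low-texture regions) the CNN prediction takes over. The function below, with the hypothetical name fuse_depth, is an illustrative simplification of such a scheme, not the paper's exact update rule:

    import numpy as np

    def fuse_depth(d_cnn, var_cnn, d_slam, var_slam):
        """Inverse-variance fusion of a CNN-predicted depth map with a
        SLAM-estimated depth map (all inputs are (H, W) float arrays).
        Pixels where SLAM has no valid estimate should carry
        var_slam = np.inf (with any finite placeholder in d_slam),
        so that the CNN prediction takes over there entirely."""
        w_cnn = 1.0 / var_cnn
        w_slam = 1.0 / var_slam
        fused = (w_cnn * d_cnn + w_slam * d_slam) / (w_cnn + w_slam)
        fused_var = 1.0 / (w_cnn + w_slam)
        return fused, fused_var

Because the CNN predicts metrically scaled depth, the same predicted maps can also anchor the absolute scale that pure monocular SLAM cannot observe, e.g., by rescaling the SLAM depths to agree with the prediction on reliable pixels.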