C. Rupprecht, C. Lea, F. Tombari, N. Navab, G. D. Hager
Sensor Substitution for Video-based Action Recognition. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Daejeon, Korea, October 2016.
There are many applications where domain-specific sensing, for example accelerometers, kinematics, or force sensing, provides unique and important information for control or for analysis of motion. However, it is not always the case that these sensors can be deployed or accessed beyond laboratory environments. For example, it is possible to instrument humans or robots to measure motion in the laboratory in ways that are not possible to replicate in the wild. An alternative, which we explore in this paper, is to use situations where sensing is available to train a substitute algorithm that operates on readily available sensor data such as video. We present two examples of this sensor substitution methodology. The first variation trains a convolutional neural network to regress a real-valued signal -- robot end-effector pose -- from video. The second regresses binary signals detecting when specific objects are in motion. We evaluate these on the JIGSAWS dataset for robotic surgery training assessment and on the 50 Salads dataset for modeling complex structured cooking tasks. Applying the trained models to video-based action recognition, we show that they provide information comparable to the sensory signals they replace.
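To make the first variation concrete, below is a minimal PyTorch sketch of the general idea: a frame-wise CNN trained by L2 regression against synchronized sensor readings (e.g., recorded end-effector poses), so that at test time the pose signal can be predicted from video alone. The layer sizes, the 7-D pose parameterization (position plus quaternion), and all hyperparameters here are illustrative assumptions, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Frame-wise CNN that regresses a pose vector from an RGB frame.

    Illustrative sketch only: layer sizes and the 7-D output
    (3-D position + unit quaternion) are assumptions, not the
    network used in the paper.
    """

    def __init__(self, out_dim: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.head = nn.Linear(128, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One training step: regress frames onto synchronized sensor targets.
model = PoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 120, 160)  # batch of video frames (dummy data)
poses = torch.randn(8, 7)             # synchronized kinematic targets (dummy)
loss = nn.functional.mse_loss(model(frames), poses)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The second variation would follow the same pattern with a sigmoid output and a binary cross-entropy loss, predicting per-object motion labels instead of a continuous pose.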
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.