Grace Tsai and Benjamin Kuipers. 2014.
Handling perceptual clutter for robot vision with partial model-based interpretations.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

Abstract

For a robot to act in the world, it needs to build and maintain a simple and concise model of that world, from which it can derive safe opportunities for action and hazards to avoid. Unfortunately, the world itself is infinitely complex, containing aspects ("clutter") that are not well described, or even well approximated, by the simple model. An adequate explanatory model must therefore explicitly delineate the clutter that it does not attempt to explain. As the robot searches for the best model to explain its observations, it faces a three-way trade-off among the coverage of the model, the degree of accuracy with which the model explains the observations, and the simplicity of the model. We present a likelihood function that addresses this trade-off. We demonstrate and evaluate this likelihood function in the context of a mobile robot doing visual scene understanding. Our experimental results on a corpus of RGB-D videos of cluttered indoor environments demonstrate that this method is capable of creating a simple and concise planar model of the major structures (ground plane and walls) in the environment, while separating out for later analysis segments of clutter represented by 3D point clouds.
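The paper does not reproduce its likelihood function here, but the three-way trade-off it describes can be illustrated with a generic penalized-likelihood sketch: points close to a hypothesized plane are scored by a Gaussian residual term (accuracy), points no plane explains are set aside as clutter under a flat low likelihood (delineating what the model does not cover), and a per-plane penalty rewards simplicity. All function names, thresholds, and parameter values below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def model_score(points, planes, sigma=0.02, clutter_ll=math.log(1e-3),
                plane_penalty=5.0, inlier_thresh=0.05):
    """Score a candidate planar model of a 3D point cloud.

    Each point is either explained by its nearest plane (Gaussian residual
    log-likelihood) or labeled clutter (flat low log-likelihood). Each
    plane in the model incurs a fixed complexity penalty. Parameter
    values are illustrative, not taken from the paper.
    """
    norm = -0.5 * math.log(2 * math.pi * sigma * sigma)
    total, clutter = 0.0, []
    for p in points:
        # Distance from point p to each plane (n, d), n a unit normal.
        dists = [abs(sum(ni * pi for ni, pi in zip(n, p)) + d)
                 for n, d in planes]
        r = min(dists) if dists else float("inf")
        if r <= inlier_thresh:
            total += norm - 0.5 * (r / sigma) ** 2  # explained point
        else:
            total += clutter_ll                     # clutter point
            clutter.append(p)
    total -= plane_penalty * len(planes)            # simplicity penalty
    return total, clutter
```

Under this kind of score, adding a plane is worthwhile only when the gain from converting clutter points into well-explained inliers outweighs the per-plane penalty, which is the coverage/accuracy/simplicity tension the abstract describes.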

Download

Dataset

