Learning to Sense Robustly and Act Effectively

Benjamin Kuipers, P.I.
Silvio Savarese, co-P.I.
University of Michigan

Research grant (CPS-0931474) from NSF Cyber-Physical Systems program, 2009-2014.

Project Summary

The physical environment of a cyber-physical system is unboundedly complex, changing continuously in time and space. An embodied cyber-physical system, embedded in the physical world, will receive a high-bandwidth stream of sensory information, and may have multiple effectors with continuous control signals. In addition to dynamic change in the world, the properties of the cyber-physical system itself -- its sensors and effectors -- change over time. How can it cope with this complexity?

Our hypothesis is that a successful cyber-physical system will need to be a learning agent, learning the properties of its sensors, effectors, and environment from its own experience, and adapting over time. Inspired by human developmental learning, we believe that foundational concepts such as Space, Object, Action, etc., are essential for such a learning agent to abstract and control the complexity of its world. To bridge the gap between continuous interaction with the physical environment and discrete symbolic descriptions that support effective planning, the agent will need multiple representations for these foundational domains, linked by abstraction relations.

In previous work, we have developed the Spatial Semantic Hierarchy (SSH), a hierarchy of representations for large-scale and small-scale space describing how a mobile learning agent (human or robot) can learn a cognitive map from exploration experience in its environment. The SSH shows how a local metrical map can be abstracted to local topological representations, which can be linked over time to construct a global topological map, which in turn can be used as the skeleton for a global metrical map. The robustness of human knowledge of space comes in part from the simultaneous availability of all of these representations.
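The SSH's chain of abstractions can be illustrated with a minimal sketch. All class and method names below are hypothetical, chosen only to mirror the hierarchy described above (local metrical maps, a global topological map built from them, and a global metrical map anchored on that topological skeleton); they are not the project's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class LocalMetricalMap:
    """Metrical map of the robot's immediate surround at one place."""
    place_id: int
    points: list  # (x, y) obstacle points in the local frame

@dataclass
class TopologicalMap:
    """Global topological map: abstracted places linked by travel actions."""
    places: dict = field(default_factory=dict)  # place_id -> LocalMetricalMap
    edges: list = field(default_factory=list)   # (from_place, to_place, action)

    def add_place(self, local_map):
        self.places[local_map.place_id] = local_map

    def connect(self, a, b, action):
        self.edges.append((a, b, action))

    def global_metrical_skeleton(self, layout):
        """Anchor each local map at a global pose (here, a translation)
        to assemble a global metrical map on the topological skeleton."""
        return {
            pid: [(x + layout[pid][0], y + layout[pid][1])
                  for (x, y) in m.points]
            for pid, m in self.places.items()
        }

# Exploration builds local metrical maps, abstracts each to a place,
# and links the places over time into a global topological map.
topo = TopologicalMap()
topo.add_place(LocalMetricalMap(0, [(1.0, 0.0)]))
topo.add_place(LocalMetricalMap(1, [(0.0, 2.0)]))
topo.connect(0, 1, "travel-north")

# Global poses for each place then yield the global metrical map.
global_map = topo.global_metrical_skeleton({0: (0.0, 0.0), 1: (0.0, 5.0)})
```

The point of the sketch is that each level remains available simultaneously: the topological map does not replace the local metrical maps, it organizes them.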

Building on this approach, we are developing the Object Semantic Hierarchy (OSH), which shows how a learning agent can create a hierarchy of representations for objects it interacts with. The OSH shows how the "object abstraction" factors the uncertainty in the sensor stream into object models and object trajectories. These object models then support the creation of action models, abstracting from low-level motor signals.
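The factoring idea can be sketched concretely. The toy example below assumes a rigid object observed as 2-D points under translation-only motion; the function name and representation are illustrative, not the OSH implementation. The raw stream is factored into a constant object model (shape in the object's own frame) and a time-varying trajectory that carries all of the change.

```python
def factor_observations(frames):
    """Given per-frame point observations of one rigid object, recover a
    model (points relative to the object's centroid) and a trajectory
    (the centroid's position at each time step)."""
    trajectory = []
    for pts in frames:
        cx = sum(x for x, _ in pts) / len(pts)
        cy = sum(y for _, y in pts) / len(pts)
        trajectory.append((cx, cy))
    # Object model: the first frame's points, expressed in the object frame.
    x0, y0 = trajectory[0]
    model = [(x - x0, y - y0) for x, y in frames[0]]
    return model, trajectory

# A unit square translating one unit right per frame:
frames = [[(t, 0), (t + 1, 0), (t, 1), (t + 1, 1)] for t in range(3)]
model, traj = factor_observations(frames)
# The model stays constant; the uncertainty and change in the sensor
# stream are absorbed entirely by the trajectory.
```

Once the model is separated from the trajectory, an action model can be learned as a mapping from motor signals to trajectory changes, without re-learning the object's shape.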

To ensure generality across cyber-physical systems, our methods make only very generic assumptions about the nature of the sensors, effectors, and environment. However, to provide a physical testbed for rapid evaluation and refinement of our methods, we have designed a model laboratory robotic system to be built from off-the-shelf components, including a stereo camera, a pan-tilt-translate base, and a manipulator arm.

For dissemination and replication of our research results, the core system will be affordable and easily duplicated at other labs. We will distribute our plans, our control software, and the software for our experiments, to encourage other labs to replicate and extend our work. The same system will serve as a platform for an open-ended set of undergraduate laboratory tasks, ranging from classroom exercises, to term projects, to independent study projects. We have a preliminary design for a very inexpensive version of the model cyber-physical system that can be constructed from servo motors and pan-tilt webcams, for use in collaborating high schools and middle schools, to communicate the breadth and excitement of STEM research.

Intellectual Merit: This project bridges the gap between continuous dynamical interaction with the physical world, and discrete abstractions useful for creating high-level plans. It draws on state-of-the-art methods in artificial intelligence, robotics, and computer vision. The object and action abstractions help make the complexity of the world tractable to the agent. These abstractions must be learned by the agent to be a good fit to its sensors, effectors, and environment. The robotic testbed allows rapid experimentation and evaluation. These results will be important for cyber-physical systems operating in an unboundedly complex world.

Broader Impact: Our robotic testbed is a comprehensible experimental environment, accessible to a broad audience. It will introduce that audience to the concepts of cyber-physical systems, and more generally to the power and excitement of research in the STEM fields. We believe that the developmental learning perspective will help attract underrepresented groups, especially girls, to the problem of "teaching the robot to learn" about its world. Robotics projects help children experience the "parent's perspective" of teaching their robot to do something, and then watching anxiously to see whether their creation will actually succeed.


Related Recent Publications

These publications are related to the project, but predate it or were supported by other funding.

The full set of papers on our bootstrap learning research is available.

This work has taken place in the Intelligent Robotics Lab in the Computer Science and Engineering Division of the Electrical Engineering and Computer Science Department at the University of Michigan. Research of the Intelligent Robotics lab is supported in part by grant CPS-0931474 from the National Science Foundation.