Benjamin Kuipers, P.I.
Silvio Savarese, co-P.I.
University of Michigan
Research grant (CPS-0931474) from NSF Cyber-Physical Systems program, 2009-2014.
The physical environment of a cyber-physical system is unboundedly complex, changing continuously in time and space. An embodied cyber-physical system, embedded in the physical world, receives a high-bandwidth stream of sensory information and may have multiple effectors with continuous control signals. In addition to dynamic change in the world, the properties of the cyber-physical system itself -- its sensors and effectors -- change over time. How can it cope with this complexity?
Our hypothesis is that a successful cyber-physical system will need to be a learning agent, learning the properties of its sensors, effectors, and environment from its own experience, and adapting over time. Inspired by human developmental learning, we believe that foundational concepts such as Space, Object, and Action are essential for such a learning agent to abstract and control the complexity of its world. To bridge the gap between continuous interaction with the physical environment and discrete symbolic descriptions that support effective planning, the agent will need multiple representations for these foundational domains, linked by abstraction relations.
In previous work, we have developed the Spatial Semantic Hierarchy (SSH), a hierarchy of representations for large-scale and small-scale space describing how a mobile learning agent (human or robot) can learn a cognitive map from exploration experience in its environment. The SSH shows how a local metrical map can be abstracted to local topological representations, which can be linked over time to construct a global topological map, which in turn can be used as the skeleton for a global metrical map. The robustness of human knowledge of space comes in part from the simultaneous availability of all of these representations.
Building on this approach, we are developing the Object Semantic Hierarchy (OSH), which shows how a learning agent can create a hierarchy of representations for objects it interacts with. The OSH shows how the ``object abstraction'' factors the uncertainty in the sensor stream into object models and object trajectories. These object models then support the creation of action models, abstracting from low-level motor signals.
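The factoring performed by the object abstraction can be illustrated with a minimal sketch. This is our own illustration, not the OSH implementation: the class names, fields, and observation format below are hypothetical, chosen only to show how a raw observation stream splits into a static object model and a dynamic object trajectory.

```python
# Hypothetical sketch of the OSH "object abstraction" factoring:
# each raw observation contributes to a static object model
# (appearance) and to a dynamic trajectory (pose over time).
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """Static properties of an object, abstracted from the sensor stream."""
    name: str
    appearance_features: list = field(default_factory=list)

@dataclass
class ObjectTrajectory:
    """Time-indexed poses of one object: the dynamic part of the factoring."""
    poses: list = field(default_factory=list)  # (t, x, y) tuples

    def update(self, t, x, y):
        self.poses.append((t, x, y))

def factor_observation(obs, model, trajectory):
    """Split one raw observation into a model update and a trajectory update."""
    model.appearance_features.append(obs["appearance"])
    trajectory.update(obs["t"], obs["x"], obs["y"])

# A short observation stream, factored into (model, trajectory):
model, traj = ObjectModel("cup"), ObjectTrajectory()
for obs in [{"t": 0, "x": 1.0, "y": 2.0, "appearance": "f0"},
            {"t": 1, "x": 1.1, "y": 2.0, "appearance": "f1"}]:
    factor_observation(obs, model, traj)
```

The point of the factoring is that uncertainty about what the object looks like accumulates in the model, while uncertainty about where it is accumulates in the trajectory, so each can be estimated and refined separately.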
To ensure generality across cyber-physical systems, our methods make only very generic assumptions about the nature of the sensors, effectors, and environment. However, to provide a physical testbed for rapid evaluation and refinement of our methods, we have designed a model laboratory robotic system to be built from off-the-shelf components, including a stereo camera, a pan-tilt-translate base, and a manipulator arm.
For dissemination and replication of our research results, the core system will be affordable and easily duplicated at other labs. We will distribute our plans, our control software, and the software for our experiments, to encourage other labs to replicate and extend our work. The same system will serve as a platform for an open-ended set of undergraduate laboratory tasks, ranging from classroom exercises to term projects to independent study projects. We have a preliminary design for a very inexpensive version of the model cyber-physical system that can be constructed from servo motors and pan-tilt webcams, for use in collaborating high schools and middle schools, to communicate the breadth and excitement of STEM research.
Intellectual Merit: This project bridges the gap between continuous dynamical interaction with the physical world, and discrete abstractions useful for creating high-level plans. It draws on state-of-the-art methods in artificial intelligence, robotics, and computer vision. The object and action abstractions help make the complexity of the world tractable to the agent. These abstractions must be learned by the agent to be a good fit to its sensors, effectors, and environment. The robotic testbed allows rapid experimentation and evaluation. These results will be important for cyber-physical systems operating in an unboundedly complex world.
Broader Impact: Our robotic testbed is a comprehensible experimental environment, accessible to a broad audience. It will introduce that audience to the concepts of cyber-physical systems, and more generally to the power and excitement of research in the STEM fields. We believe that the developmental learning perspective will help attract underrepresented groups, especially girls, to the problem of ``teaching the robot to learn'' about its world. Robotics projects let children experience the ``parent's perspective'': teaching their robot to do something, and then watching anxiously to see whether their creation will actually succeed.
Grace Tsai. 2014.
On-line, incremental visual scene understanding
for an indoor navigating robot.
Doctoral dissertation,
Department of Electrical Engineering and Computer Science,
University of Michigan.
Grace Tsai and Benjamin Kuipers. 2014.
Handling perceptual clutter for robot vision with partial model-based interpretations.
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2014.
Grace Tsai, Collin Johnson, and Benjamin Kuipers. 2014.
Semantic visual understanding of indoor environments: from structures to opportunities for action.
Vision Meets Cognition Workshop (CVPR), 2014.
R. Mittelman, M. Sun, B. Kuipers and S. Savarese. 2014.
A Bayesian generative model for learning semantic hierarchies.
Frontiers in Psychology: Hypothesis and Theory
5(417): 1-9, May 2014.
R. Mittelman, B. Kuipers, S. Savarese and H. Lee. 2014.
Structured Recurrent Temporal Restricted Boltzmann Machines.
Int. Conf. on Machine Learning (ICML).
Grace Tsai and Benjamin Kuipers. 2013.
Focusing attention on visual features that matter.
British Machine Vision Conference (BMVC), 2013.
R. Mittelman, M. Sun, B. Kuipers and S. Savarese. 2013.
Learning hierarchical linguistic descriptions of visual datasets.
2013 Conf. North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT).
R. Mittelman, H. Lee, B. Kuipers and S. Savarese. 2013.
Weakly supervised learning of mid-level features with Beta-Bernoulli process restricted Boltzmann machines.
IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR).
Paul Foster, Zhenghong Sun, Jong Jin Park and Benjamin Kuipers. 2013.
VisAGGE: Visible Angle Grid for Glass Environments.
IEEE Int. Conf. on Robotics and Automation (ICRA-13).
Jong Jin Park and Benjamin Kuipers. 2013.
Autonomous person pacing and following with
Model Predictive Equilibrium Point Control.
IEEE Int. Conf. on Robotics and Automation (ICRA-13).
Changhai Xu, Jingen Liu and Benjamin Kuipers. 2012.
Moving object segmentation using motor signals.
European Conf. on Computer Vision (ECCV), 2012.
Grace Tsai and Benjamin Kuipers. 2012.
Dynamic visual understanding of the local environment
for an indoor navigating robot.
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2012.
Jong Jin Park, Collin Johnson, and Benjamin Kuipers. 2012.
Robot Navigation with Model Predictive Equilibrium Point Control.
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2012.
Collin Johnson and Benjamin Kuipers. 2012.
Efficient search for correct and useful topological maps.
IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2012.
Benjamin Kuipers. 2012. An existing, ecologically-successful genus of collectively intelligent artificial creatures. Collective Intelligence (CI-2012).
Grace Tsai, Changhai Xu, Jingen Liu and Benjamin Kuipers. 2011.
Real-time indoor scene understanding
using Bayesian filtering with motion cues.
Int. Conf. on Computer Vision (ICCV), 2011.
Jingen Liu, Benjamin Kuipers and Silvio Savarese. 2011.
Recognizing human actions by attributes.
IEEE Conf. Computer Vision and Pattern Recognition (CVPR-11).
Jingen Liu, Mubarak Shah, Benjamin Kuipers and Silvio Savarese. 2011.
Cross-view action recognition via view knowledge transfer.
IEEE Conf. Computer Vision and Pattern Recognition (CVPR-11).
Changhai Xu, Jingen Liu and Benjamin Kuipers. 2011.
Motion segmentation by learning homography matrices
from motor signals.
Canadian Conference on Computer and Robot Vision (CRV-11).
Winner: Best Student Paper Award
Changhai Xu and Benjamin Kuipers. 2011.
Object detection using principal contour fragments.
Canadian Conference on Computer and Robot Vision (CRV-11).
Jong Jin Park and Benjamin Kuipers. 2011.
A smooth control law for graceful motion of differential wheeled
mobile robots in 2D environments.
IEEE Int. Conf. on Robotics and Automation (ICRA-11).
Jeremy Stober, Risto Miikkulainen and Benjamin Kuipers. 2011.
Learning geometry from sensorimotor experience.
First Joint IEEE Int. Conf. on Development and Learning
and Epigenetic Robotics (ICDL-EpiRob 2011).
Changhai Xu and Benjamin Kuipers. 2010.
Towards the Object Semantic Hierarchy.
Ninth IEEE Int. Conf. on Development and Learning (ICDL-10).
*M. Sun, *S. Ying-Ze Bao and S. Savarese (*equal contribution). 2010.
Object detection with geometrical context feedback loop.
British Machine Vision Conference (BMVC), 59.1-59.11, 2010, oral presentation.
W. Choi and S. Savarese. 2010.
Multiple target tracking in world coordinate with single, minimally calibrated camera.
European Conference on Computer Vision (ECCV), 553-567, 2010.
M. Sun, G. Bradski, B. Xu and S. Savarese. 2010.
Depth-encoded Hough voting for joint object detection and shape recovery.
European Conference on Computer Vision (ECCV), 658-671, 2010.
The following publications are related to this project, but predate it or were supported by other funding.
Changhai Xu. 2011.
Steps Towards the Object Semantic Hierarchy.
Doctoral dissertation, Computer Science Department,
University of Texas at Austin.
Jonathan Mugan. 2010. Autonomous Qualitative Learning of Distinctions and Actions in a Developing Agent. Doctoral dissertation, Computer Science Department, The University of Texas at Austin.
Jonathan's video describing QLAP won the Best Educational Video award at the 2010 AAAI Video Competition.
Patrick Beeson, Joseph Modayil and Benjamin Kuipers. 2010.
Factoring the mapping problem: mobile robot map-building in the
Hybrid Spatial Semantic Hierarchy.
International Journal of Robotics Research 29(4): 428-459, 2010.
Jonathan Mugan and Benjamin Kuipers. 2009.
Autonomously learning an action hierarchy using
a learned qualitative state representation.
International Joint Conference on Artificial Intelligence (IJCAI-09).
Joseph Modayil and Benjamin Kuipers. 2008.
The initial development of object knowledge by a learning robot.
Robotics and Autonomous Systems 56: 879--890.
Jonathan Mugan and Benjamin Kuipers. 2008.
Towards the application of reinforcement learning
to undirected developmental learning.
International Conference on Epigenetic Robotics (Epirob-08).
Jeremy Stober and Benjamin Kuipers. 2008.
From pixels to policies: a bootstrapping agent.
IEEE International Conference on Development and Learning (ICDL-08).
Jonathan Mugan and Benjamin Kuipers. 2008.
Continuous-domain reinforcement learning
using a learned qualitative state representation.
International Workshop on Qualitative Reasoning (QR-08).
Changhai Xu, Yong Jae Lee, and Benjamin Kuipers. 2008.
Ray-based color image segmentation.
Canadian Conference on Computer and Robot Vision, 2008.
Jonathan Mugan and Benjamin Kuipers. 2007.
Learning distinctions and rules in a continuous world
through active exploration.
7th International Conference on Epigenetic Robotics (Epirob-07).