RI Small:
Robot developmental learning of skilled actions

Benjamin Kuipers, PI
University of Michigan

Research grant (IIS-1421168) from NSF Robust Intelligence program, 2014-2017.

Project Summary

Our goal is to show how a robot can learn its own concepts of objects and actions, learning to take actions and manipulate objects the way a human child does. A child learns first to control its own body, then to have deliberate effects on certain objects that it can recognize. Over time it learns a hierarchy of increasingly complex actions, and an increasing depth of knowledge about the objects it encounters and the affordances they provide. We claim that for a robot to learn to act at a human level of skill and robustness in ordinary human environments, it will require the same kind of learning sequence.

Our approach to achieving this kind of learning behavior draws on extensive prior work on foundational knowledge representations and machine learning to bootstrap from "pixel-level" input to "object-level" representations. In our framework, learning begins by detecting low-level regularities — contingencies among observed events — and refines them into increasingly reliable predictive rules that can be used to define actions. For a sufficiently reliable rule, a simple MDP model can be formulated, and reinforcement learning methods can learn a policy for accomplishing an action at the next level of the action hierarchy.
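The rule-to-policy step above can be sketched in miniature. Here a hypothetical predictive rule ("pushing reliably advances the object toward the goal, about 90% of the time") is cast as a toy MDP, and a policy is learned by tabular Q-learning. The states, actions, and reliability figure are invented stand-ins for illustration, not the project's actual representations.

```python
import random
from collections import defaultdict

# Toy MDP derived from a hypothetical predictive rule: states 0..3;
# "push" advances toward the goal state with probability 0.9 (the rule's
# observed reliability); "wait" does nothing.
GOAL = 3
ACTIONS = ["push", "wait"]

def step(state, action):
    """One transition of the toy MDP; reward 1.0 on reaching the goal."""
    if action == "push" and random.random() < 0.9:
        state = min(state + 1, GOAL)
    reward = 1.0 if state == GOAL else 0.0
    return state, reward

def q_learn(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

random.seed(0)
Q = q_learn()
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # "push" should dominate in every pre-goal state
```

Once learned, such a policy itself becomes a named action, available as a primitive when the next rule up the hierarchy is discovered.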

The quality of policies, and therefore of learned actions, will improve with experience. Intrinsic motivation methods that reward actions for successful learning will help the robot initially. But as knowledge accumulates, the robot's learning needs more guidance, which we believe comes from learning from imitation. In our approach, when the robot observes the behavior of a more expert agent, it creates a qualitative representation of that behavior, which it uses to formulate a goal (or reward function) for its learning. The success or failure of the resulting learning tells the robot whether the qualitative representation was sufficiently accurate; failure may prompt further observations and a better representation of the behavior before imitation can succeed.
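One way the qualitative-representation-to-reward step might look is sketched below: an observed numeric trajectory is abstracted into a sequence of qualitative events (here, just the sign of change of one variable), and the reward function scores the learner's own trajectories by whether they reproduce that sequence. The variable, the event vocabulary, and the example demonstration are invented for illustration and are not the project's actual representation.

```python
def qualitative_events(trajectory, tol=1e-6):
    """Abstract a numeric trajectory into its sequence of qualitative
    changes: '+' (increasing), '-' (decreasing), '0' (steady),
    keeping only transitions between regimes."""
    events = []
    for prev, cur in zip(trajectory, trajectory[1:]):
        delta = cur - prev
        sign = "+" if delta > tol else "-" if delta < -tol else "0"
        if not events or events[-1] != sign:
            events.append(sign)
    return events

def make_reward(target_events):
    """Build a reward function: 1.0 when a trajectory reproduces the
    observed qualitative event sequence, else 0.0."""
    def reward(trajectory):
        return 1.0 if qualitative_events(trajectory) == target_events else 0.0
    return reward

# Hypothetical demonstration: gripper height rises, holds, then falls
# (e.g. a pick-and-place motion).
demo = [0.0, 0.2, 0.5, 0.5, 0.3, 0.1]
target = qualitative_events(demo)
print(target)                    # ['+', '0', '-']
r = make_reward(target)
print(r([0.0, 0.4, 0.4, 0.0]))   # same qualitative shape: 1.0
print(r([0.0, 0.4, 0.8]))        # only rises: 0.0
```

A coarse reward like this lets the robot test its abstraction: if reinforcement learning under this reward fails to produce behavior resembling the demonstration, the qualitative representation was likely too coarse and needs refining from further observation.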

We propose to develop and evaluate our theories using the bimanual Baxter robot in our lab, and its existing physically realistic simulator, both accessible through ROS. A curriculum of learning tasks involving children’s toys will allow us to evaluate both the learning of skilled actions, and the ability to guide that learning through imitation.

Intellectual Merit: Foundational knowledge of actions, and knowledge of the opportunities for action (affordances) that objects provide, are necessary for a robot to function at a human level in a realistically complex environment. Understanding how this knowledge can be acquired through autonomous learning without human guidance or programming addresses an important scientific question at the foundation of artificial intelligence. It will also address an important technological need: robots that can learn to act safely and skillfully in complex human environments.

Broader Impact: This project will train two doctoral students in computer science, robotics, computer vision, machine learning, and control, helping to meet critical national needs. Better models of developmental learning of skilled and robust action hierarchies could lead to advances in both diagnosis and remediation of learning disabilities. We have also found that demonstrations of robots and robot learning are very effective means for outreach to the general public, including encouraging students toward further education in STEM fields.


The full set of papers on our developmental robotics research is available.

This work has taken place in the Intelligent Robotics Lab in the Computer Science and Engineering Division of the Electrical Engineering and Computer Science Department at the University of Michigan. Research of the Intelligent Robotics lab is supported in part by grant IIS-1421168 from the National Science Foundation.