Benjamin Kuipers, PI
University of Michigan
Research grant (IIS-1421168) from the NSF Robust Intelligence program, 2014-2020.
Our goal is to show how a robot can learn its own concepts of objects and actions, learning to take actions and manipulate objects the way a human child does. A child learns first to control its own body, then to have deliberate effects on certain objects that it can recognize. Over time it learns a hierarchy of increasingly complex actions, and an increasing depth of knowledge about the objects it encounters and the affordances they provide. We claim that for a robot to learn to act at a human level of skill and robustness in ordinary human environments, it will require the same kind of learning sequence.
Our approach to achieving this kind of learning draws on extensive prior work on foundational knowledge representations and machine learning to bootstrap from "pixel-level" input to "object-level" representations. In our framework, learning begins by detecting low-level regularities (contingencies among observed events) and refines them into increasingly reliable predictive rules that can be used to define actions. For a sufficiently reliable rule, a simple MDP model can be formulated, and reinforcement learning methods can learn a policy for accomplishing an action at the next level of the action hierarchy.
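To make the pipeline concrete, the following is a minimal sketch of the two stages described above: tracking contingency statistics until a rule is reliable enough to define an action, and tabular Q-learning over an MDP whose action set is drawn from such rules. All names, thresholds, and the toy MDP are hypothetical illustrations, not the project's actual implementation.

```python
from collections import defaultdict
import random

class ContingencyLearner:
    """Track how often an action taken in a context is followed by an
    outcome, refining raw co-occurrence counts into predictive rules.
    (Illustrative sketch; names and thresholds are assumptions.)"""
    def __init__(self, reliability_threshold=0.9, min_trials=20):
        # (context, action) -> [successes, trials]
        self.counts = defaultdict(lambda: [0, 0])
        self.threshold = reliability_threshold
        self.min_trials = min_trials

    def observe(self, context, action, outcome_occurred):
        stats = self.counts[(context, action)]
        stats[1] += 1
        if outcome_occurred:
            stats[0] += 1

    def reliable_rules(self):
        """Rules reliable enough to serve as actions in a higher-level MDP."""
        return [(c, a, s / n) for (c, a), (s, n) in self.counts.items()
                if n >= self.min_trials and s / n >= self.threshold]

def q_learning(states, actions, step, reward, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning: learn a policy for the MDP built from
    sufficiently reliable rules."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(states)
        for _ in range(50):
            # epsilon-greedy action selection
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda act: Q[(s, act)]))
            s2 = step(s, a)
            Q[(s, a)] += alpha * (reward(s2)
                                  + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            s = s2
    return Q
```

In this sketch, a rule only becomes an MDP action once its observed reliability clears the threshold, which mirrors the staged bootstrapping the paragraph describes: statistics first, rules second, policies third.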
The quality of policies, and therefore of learned actions, will improve with experience. Intrinsic motivation methods that reward actions for successful learning will help the robot initially, but as knowledge accumulates, the robot’s learning needs more guidance, which we believe comes from learning by imitation. In our approach, when the robot observes the behavior of a more expert agent, it creates a qualitative representation of that behavior, which it uses to formulate a goal (or reward function) for its own learning. The success or failure of the resulting learning tells the robot whether the qualitative representation was sufficiently accurate; failure may prompt further observations and a better representation of the behavior before imitation can succeed.
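The imitation loop above can be sketched in code: a demonstration is abstracted into a qualitative event sequence, and that sequence becomes a reward function scoring how much of the demonstrated structure the robot's own attempt reproduces. The event vocabulary, the 1-D geometry, and all function names are simplifying assumptions for illustration, not the project's actual representation.

```python
def qualitative_events(trajectory, contact_eps=0.01):
    """Abstract an observed demonstration (a list of (hand_pos, obj_pos)
    pairs, 1-D for simplicity) into an ordered qualitative event sequence."""
    events = []
    for (h0, o0), (h1, o1) in zip(trajectory, trajectory[1:]):
        # hand getting closer to the object
        if abs(h1 - o1) < abs(h0 - o0) and "approach" not in events:
            events.append("approach")
        # hand effectively touching the object
        if abs(h1 - o1) <= contact_eps and "contact" not in events:
            events.append("contact")
        # object displaced: the demonstrated effect
        if abs(o1 - o0) > contact_eps and "object-moves" not in events:
            events.append("object-moves")
    return events

def imitation_reward(goal_events):
    """Build a reward function from the demonstration's event sequence:
    +1 for each goal event the learner's attempt reproduces in order,
    so fully matching the demonstration scores highest."""
    def reward(attempt_events):
        score, i = 0, 0
        for e in attempt_events:
            if i < len(goal_events) and e == goal_events[i]:
                score += 1
                i += 1
        return score
    return reward
```

A failed imitation attempt shows up as a low score against this reward, which is the signal the paragraph describes: the robot either needs more practice or a better qualitative abstraction of the demonstration.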
We propose to develop and evaluate our theories using the bimanual Baxter robot in our lab, and its existing physically realistic simulator, both accessible through ROS. A curriculum of learning tasks involving children’s toys will allow us to evaluate both the learning of skilled actions, and the ability to guide that learning through imitation.
Intellectual Merit: Foundational knowledge of actions, and knowledge of the opportunities for action (affordances) that objects provide, are necessary for a robot to function at a human level in a realistically complex environment. Understanding how this knowledge can be acquired through autonomous learning, without human guidance or programming, addresses an important scientific question at the foundation of artificial intelligence. It will also address an important technological need: robots that can learn to act safely and skillfully in complex human environments.
Broader Impact: This project will train two doctoral students in computer science, robotics, computer vision, machine learning, and control, helping to meet critical national needs. Better models of the developmental learning of skilled and robust action hierarchies could lead to advances in both the diagnosis and the remediation of learning disabilities. We have also found that demonstrations of robots and robot learning are a very effective means of outreach to the general public, including encouraging students toward further education in STEM fields.
Jonathan E. Juett.
Towards Learning the Foundations of Manipulation Actions from Unguided Exploration.
PhD thesis, Computer Science & Engineering, University of Michigan, 2021.
Jonathan Juett and Benjamin Kuipers.
Learning and acting in peripersonal space: Moving, reaching, and grasping.
Frontiers in Neurorobotics 13:4, 2019.
Jonathan Juett and Benjamin Kuipers.
Learning to Grasp by Extending the Peri-Personal Space Graph.
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2018.
Benjamin Kuipers.
How can we trust a robot?
Communications of the ACM, 61(3): 86-95, March 2018.
Collin Johnson and Benjamin Kuipers.
Socially-aware navigation using topological maps and social norm learning.
AAAI/ACM Conf. on Artificial Intelligence, Ethics, and Society (AIES), 2018.
Emanuella Burton, Judy Goldsmith, Sven Koenig, Benjamin Kuipers, Nicholas Mattei, and Toby Walsh.
Ethical considerations in artificial intelligence courses.
AI Magazine, Summer 2017.
Tom Williams, Collin Johnson, Matthias Scheutz, and Benjamin Kuipers.
A tale of two architectures: A dual-citizenship integration of natural language and the cognitive map.
Int. Conf. Autonomous Agents and Multi-Agent Systems (AAMAS), 2017.
Jong Jin Park, Seungwon Lee, and Benjamin Kuipers.
Discrete-time dynamic modeling and calibration of differential-drive mobile robots with friction.
IEEE Int. Conf. Robotics and Automation (ICRA), 2017.
Jonathan Juett and Benjamin Kuipers.
Learning to reach by building a representation of peri-personal space.
IEEE/RSJ Int. Conf. Humanoid Robots, 2016.
Benjamin Kuipers.
Human-like morality and ethics for robots.
AAAI-16 Workshop on AI, Ethics & Society, 2016.
Benjamin Kuipers.
Toward morality and ethics for robots.
AAAI Spring Symposium on Ethical and Moral Considerations in Non-Human Agents (EMCAI 2016), 2016.
Jong Jin Park and Benjamin Kuipers.
Feedback motion planning via non-holonomic RRT* for mobile robots.
IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS), 2015.