Intelligent robots are becoming ever more present in our lives (think of the Google self-driving car) and ever more sophisticated. But we still face the major challenge of creating robots that can function effectively, robustly, and safely in the complexity of the everyday human environment. Learning will be necessary to meet this challenge, but precisely what to learn, and how to represent what is learned, remain open questions.
Fortunately, we have examples of intelligent creatures that learn sophisticated knowledge about complex environments: human infants and children. Insights from developmental psychology have inspired advances in knowledge representation, computer vision, and robot learning.
Previous work on representation and learning of knowledge about space, objects, actions, and so on, has largely focused on learning from autonomous experience, without concern for other agents. In this research seminar, we will draw on that foundation to investigate robot learning in domains that explicitly involve other agents. We will focus on three major themes.
"Laws of Motion" for Animate Agents. A baby (or baby robot) can learn simple rules for predicting the behavior of inanimate objects when pushed or bumped or carried or dropped. Useful predictions about animate agents requires an entirely different set of rules. An agent does not simply respond to external events and actions. An agent decides to take an action, to achieve its goals, in the context of its beliefs about the world. Our baby robot will need a "Theory of Mind" that includes a way to distinguish between mental states of self and others, and ways to represent goals and beliefs of different agents.
Imitation of Actions by Other Agents. An observed action by a more advanced agent makes a good target for learning. But what does it mean to imitate an action? The learner must be able to represent the relevant features of the observed behavior, and then use that behavior representation as a target for its own learning process. Can we devise a learning method that simultaneously learns to match the observed behavior and learns to improve the representation of that behavior to identify the critical features?
Social Norms for Choosing Actions. As artificial agents become more important in our society, how do we prevent them from taking the pathological actions that are the stuff of science fiction movies? Game theory can help us understand the plausible equilibrium states in a society of agents, each of which acts to maximize some reward. The Prisoner's Dilemma game demonstrates that individual reward maximization can lead to poor rewards across the society. We will consider the role of social norms that constrain action choices, improving total rewards across the entire society. What norms should apply to robots and other artificial agents?
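The Prisoner's Dilemma point can be made concrete with a small worked example. The sketch below (Python; the payoff numbers are illustrative assumptions, not canonical values) checks that defection is each player's best reply no matter what the other player does, yet mutual defection yields the lowest total payoff of any outcome.

```python
# Payoffs (row player, column player) for one round of the Prisoner's Dilemma.
# 'C' = cooperate, 'D' = defect. The numbers are illustrative.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def best_reply(opponent_action):
    """The action maximizing the row player's own payoff, with the opponent held fixed."""
    return max('CD', key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is the best reply to either opponent action...
assert best_reply('C') == 'D' and best_reply('D') == 'D'

# ...so two individual reward-maximizers settle at (D, D), even though
# every other outcome has a higher total payoff across the two players.
totals = {outcome: sum(p) for outcome, p in PAYOFFS.items()}
print(totals)  # (C, C) totals 6; (D, D) totals only 2.
```

A norm that rules out the individually tempting deviation (for example, "do not defect against a cooperator") changes which outcomes are stable, which is one way to read the claim that norms can improve total rewards across the society.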
Since we will be drawing on insights from multiple researchers with different perspectives, it is important for anyone working in this area to remember the lesson of The Blind Men and the Elephant. We will gather clues where we can, to infer what our "elephant" looks like.
There will be many assigned readings from the research literature. Students will make presentations and lead discussions on state-of-the-art research, and will do a substantial term project culminating in a publication-quality paper. Within this problem area, each student will select the specific topic and methods for their term project to fit their own background and expertise.