David Pierce and Benjamin Kuipers. 1997.
Map learning with uninterpreted sensors and effectors.
Artificial Intelligence 92: 169-229.
At the lowest level of the hierarchy, the learning agent analyzes the effects of its motor control signals in order to define a new set of control signals, one for each of the robot's degrees of freedom. It uses a generate-and-test approach to define sensory features that capture important aspects of the environment. It uses linear regression to learn models that characterize the context-dependent effects of the control signals on the learned features, and uses these models to define control laws for finding and following paths defined by constraints on those features. The agent abstracts these control laws, which interact with the continuous environment, to a finite set of actions that implement discrete state transitions. At this point, the agent has abstracted the robot's continuous world to a finite-state world and can use existing methods to learn its structure.
The learning agent's methods are evaluated on several simulated robots with different sensorimotor systems and environments.
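As a rough illustration of the regression step in the abstract above, the following Python sketch fits a linear model of how motor control signals affect the time derivatives of learned sensory features. The data, variable names, and use of ordinary least squares are assumptions for illustration only, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical logged data: at each time step the agent records its motor
# control signals U(t) and the sensory feature vector Y(t) it has defined.
rng = np.random.default_rng(0)
T, n_controls, n_features = 500, 2, 3
U = rng.uniform(-1.0, 1.0, size=(T, n_controls))          # control signals
true_effect = rng.normal(size=(n_controls, n_features))    # unknown dynamics
Y = np.cumsum(U @ true_effect
              + 0.01 * rng.normal(size=(T, n_features)), axis=0)

# Approximate feature derivatives by finite differences.
dY = np.diff(Y, axis=0)
U_in = U[:-1]

# Fit a linear model dY ≈ U_in @ M by ordinary least squares.  M[i, j]
# estimates how control signal i affects feature j in the current context;
# such a model could then inform control laws that drive the features
# toward constraint surfaces (e.g., path following).
M, residuals, rank, _ = np.linalg.lstsq(U_in, dY, rcond=None)
print("Estimated control-to-feature effect matrix:\n", M)
```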