Benjamin Kuipers. 2016. Human-like morality and ethics for robots. In AAAI-16 Workshop on AI, Ethics & Society.

Abstract

Humans need morality and ethics to get along constructively as members of the same society. As we face the prospect of robots taking a larger role in society, we need to consider how they, too, should behave toward other members of society. To the extent that robots will be able to act as agents in their own right, as opposed to being simply tools controlled by humans, they will need to behave according to some moral and ethical principles.

Inspired by recent research on the cognitive science of human morality, we propose the outline of an architecture for morality and ethics in robots. As in humans, a rapid intuitive response to the current situation comes first. Reasoned reflection takes place on a slower time-scale, and is focused more on constructing a justification than on revising the reaction. A still slower process of social interaction follows, in which both the example of an action and its justification influence the moral intuitions of others. The signals an agent sends to others, and the signals it receives from them, help each agent determine which others are suitable cooperative partners, and which are likely to defect.
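
The paragraph above describes three interacting time-scales: fast intuition, slower reflective justification, and slowest social influence. The following is a minimal sketch, in Python, of how such an architecture might be organized. The class MoralAgent and all of its methods and parameters (act, reflect, observe_peer, good_partner, the trust scores and thresholds) are hypothetical names introduced here for illustration; they are not constructs from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class MoralAgent:
    # Intuitions map a situation label to a fast, cached response.
    intuitions: dict[str, str] = field(default_factory=dict)
    # Trust scores estimate which peers are reliable cooperators.
    trust: dict[str, float] = field(default_factory=dict)

    def act(self, situation: str) -> str:
        """Fast path: respond from intuition, defaulting to cooperation."""
        return self.intuitions.get(situation, "cooperate")

    def reflect(self, situation: str, action: str) -> str:
        """Slow path: construct a justification for the action already
        taken, rather than revising the reaction itself."""
        return f"In situation '{situation}', '{action}' was right because ..."

    def observe_peer(self, peer: str, situation: str,
                     action: str, justification: str) -> None:
        """Slowest path: social interaction. A peer's example (and the
        justification attached to it) nudges this agent's own intuitions,
        and the observed action updates the estimate of whether the peer
        is a suitable cooperative partner."""
        self.intuitions.setdefault(situation, action)
        delta = 0.1 if action == "cooperate" else -0.2
        self.trust[peer] = max(0.0, min(1.0,
                                        self.trust.get(peer, 0.5) + delta))

    def good_partner(self, peer: str, threshold: float = 0.5) -> bool:
        """Prefer partners unlikely to defect."""
        return self.trust.get(peer, 0.5) >= threshold


if __name__ == "__main__":
    agent = MoralAgent()
    print(agent.act("share_resource"))                 # fast intuitive default
    print(agent.reflect("share_resource", "cooperate"))
    agent.observe_peer("robot_b", "share_resource",
                       "defect", "it was expedient")
    print(agent.good_partner("robot_b"))               # False after defection
```

The sketch deliberately keeps the fast path trivial (a dictionary lookup) and places all learning in the social layer, matching the abstract's claim that reflection serves justification while social interaction reshapes intuitions.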

We illustrate this moral architecture with several examples, and we identify research results that will be necessary before the architecture can be implemented.
