Benjamin Kuipers. 2020.
Perspectives on Ethics of AI: Computer Science.
To appear in Markus Dubber, Frank Pasquale, and Sunit Das (Eds.),
Oxford Handbook of Ethics of AI, Oxford University Press, 2020.
AI is a collection of computational methods for studying human knowledge, learning, and behavior, including by building agents able to know, learn, and behave. Ethics is a body of human knowledge, far from completely understood, that helps agents (humans today, but perhaps eventually robots and other AIs) decide how they and others should behave. The ethical issues raised by AI fall into two overlapping groups.
First, potential deployments of AI raise ethical questions about the impacts they may have on human well-being, just like other powerful tools or technologies such as nuclear power or genetic engineering.
Second, unlike other technologies, intelligent robots and other AIs have the potential to be considered members of our society. Since they will make their own decisions about the actions they take, it is appropriate for humans to expect them to behave ethically. This requires AI research aimed at understanding the structure, content, and purpose of ethical knowledge well enough to implement ethics in artificial agents.
This chapter describes a computational view of the function of ethics in human society, and discusses its application to three diverse examples.