Anyone interested in AI Ethics should read Joseph Weizenbaum's book, Computer Power and Human Reason, W. H. Freeman, San Francisco, 1976. (You should know that Weizenbaum was the creator of ELIZA, the very first chatbot, in 1966.)
When I read it as a graduate student at the MIT AI Lab, I felt that he raised important points, but that he was unduly critical of the AI research community I belonged to. I wrote the short note below, summarizing my reactions. The ACM SIGART Newsletter (No. 58, June 1976) published my reaction, along with a much more negative review by John McCarthy and a response to McCarthy's review by Joe Weizenbaum.
REACTIONS TO WEIZENBAUM'S BOOK
From: Benjamin Kuipers, April 24, 1976
MIT AI Lab, Cambridge, Mass. 02139, BEN@MIT-AI
"There are more things in Heaven and Earth, Horatio,
than are dreamt of in your philosophy."
-- Hamlet, Act I, Scene 5.
I had some strong reactions to Joe Weizenbaum's book, Computer Power and Human Reason. The book raises some important concerns, but they are obscured by harsh and sometimes shrill accusations against the Artificial Intelligence research community. On the whole, it seems to me that the personal attacks distract the reader from the more valuable abstract points. I strongly recommend Samuel Florman's article "In Praise of Technology" in the November 1975 issue of Harper's Magazine for a different perspective on the role of technology in modern society.
Some of the points below restate concerns which seem to have motivated Weizenbaum to write his book. Others are my own reactions to issues which he raises. In either case, I see ideas like these as being quite current in the AI community, so I was puzzled by Weizenbaum's vehement attacks on us for not sharing them.
1. It is important for a scientist to realize that the descriptive methods of his field capture only one aspect of the phenomena he studies.
2. Point 1 notwithstanding, it is a matter of personal faith whether there are aspects of the world which cannot be fully described by some scientific (i.e., empirical) method. It is clear, of course, that many important aspects of the world are beyond our current scientific methods.
3. A scientist should recognize the difference between descriptive and prescriptive statements. Descriptive statements can be based on scientific investigation; prescriptive statements are based on values. A belief that value judgments are trivial can lead the unwise to assume that prescriptive conclusions follow directly from descriptive data.
4. JW says "The very asking of the question, 'What does a judge (or a psychiatrist) know that we cannot tell a computer?' is a monstrous obscenity." (p. 226) On the contrary, it is a fantastically interesting and important question, deserving the attention of serious thinkers. The question is essentially, "What is the difference between wisdom and knowledge?" To declare the asking of such a question obscene is anti-intellectualism at its most blatant. What actually seems to worry JW, however, is not the question, but the potential for a foolish answer.
5. Assuming that we find it possible to build an intelligent computer, there will inevitably be an enormous cultural gulf between it and humans. Social scientists can say a great deal about how much common culture is required between a professional, such as a psychiatrist or a judge, and a client. This could make the use of a computer in one of these roles inappropriate as a matter of technical judgment, rather than of moral judgment.
6. It seems exceedingly unlikely that the very difficult problems of intelligence can be solved by "hackers" without a deep theory. The primary goal of AI is to develop the computational techniques which will allow such a theory to be formulated precisely. This often requires intimate acquaintance with deep theories in psychology, linguistics, neurophysiology, or mathematics, but important work is also done on "smaller" problems, such as chess or the Blocks World.
7. It is important for us in AI to emphasize that our work with computer models of human intelligence increases our wonder and respect for human beings. We find it very useful to view Man as a computational process, just as a doctor finds it useful to view Man as a biological organism; neither position reduces Man to "simply" that role. Familiarity with complexity breeds great respect, not contempt.
8. As scientists, we should be aware of the potential misunderstandings of our results, and take steps to counter them before they become part of the popular culture. We should be especially alert to deliberate misrepresentations of technical results for individual, corporate, or governmental advantage. A good example is the use of the "incomprehensible computer" as an excuse for bureaucratic errors.
9. It is important to clarify the nature of responsibility for actions of computer programs. As with other machines, the computer is used today as a tool of humans, who should bear the responsibility for their actions. The difficulty is that computers have a much greater capacity for independent and unanticipated action than other machines. However, until we know what it would mean for a computer to "suffer the consequences" of its actions, it must be emphatically clear that a human is responsible for any act of a machine.
10. It is bad to encourage people to take a simplistic view of Man. The most important potential misunderstanding of our work lies in ethics -- one's ethical responsibilities to a machine are apparently much less than to a human. It would be bad to undercut the ethical status of Man by encouraging a simplistic belief that Man is "only a machine".
Reflecting on these comments after several decades, I stand behind my reactions from 1976, but I may have been a bit optimistic about how widely these ideas are shared within the AI research community.