Artificial Intelligence, Machine Learning, and Robotics have seen dramatic progress in the last several decades. There is increasing excitement and apprehension about the impact of these technologies as they are deployed in the world and in human society.
Ethics is the discipline within philosophy that considers which actions we humans see as right or wrong, or as good or bad. As we design intelligent artifacts that make their own decisions about how to act, and as they act within the human world, we ask how we can ensure that they will act ethically.
Two important questions arise.
First, like any other powerful technology (e.g. nuclear power, genetic engineering), AI and robotics technology raises important ethical questions about how it can and should be deployed, and what its impact will be on society. This topic includes regulations, and the processes by which regulations are proposed, adopted, and enforced.
Second, unlike other technologies, AI (and thus intelligent robotics) involves creating agents that make their own decisions about how to act in the world. Ethics is a kind of foundational knowledge that humans use to decide how to act. We need to understand the structure of that knowledge, so the AIs we create will have the knowledge they need to act appropriately.
Do we mean that humans must be ethical as we design and deploy intelligent systems? Do we mean that the systems we design and deploy must be capable of deciding what is ethical for them to do? Most likely, the answers to both questions will turn out to be “Yes!” The follow-on question is “How do we do that?”
The semester will be organized around seven major topic areas:
(1) Safety: Autonomous vehicles and their driving decisions
(2) Background: Probability, game theory, philosophy, psychology
(3) Trust, cooperation, and society
(4) Bias and fairness
(5) Surveillance and privacy
(6) Trust for corporate entities: corporations, governments
(7) Existential risk
In the course of discussing research on these problem areas, we will draw on several concepts from philosophical ethics, but we will also consider perspectives from engineering design, law, economics, evolution, history, human development, etc.
An important question for researchers in artificial intelligence and robotics is how the knowledge relevant to making ethical decisions can be represented computationally in a knowledge base, and how it can be acquired.
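To make the question concrete, here is a minimal sketch of one possible computational representation: prioritized rules that can veto candidate actions. This is a toy illustration only; the names (`EthicalRule`, `evaluate`, the example rules) and the prioritized-veto scheme are hypothetical assumptions, not part of the course material or any established system.

```python
# Toy sketch: ethical knowledge as a small base of prioritized rules.
# Each rule can forbid a candidate action; rules are checked in
# priority order. All names here are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicalRule:
    name: str
    priority: int                      # lower number = higher priority
    forbids: Callable[[dict], bool]    # does this rule forbid the action?

def evaluate(action: dict, rules: list[EthicalRule]) -> tuple[bool, str]:
    """Return (permitted?, reason), checking rules in priority order."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.forbids(action):
            return False, f"forbidden by rule '{rule.name}'"
    return True, "no rule forbids this action"

rules = [
    EthicalRule("avoid harm", 1,
                lambda a: a.get("expected_harm", 0) > 0),
    EthicalRule("respect privacy", 2,
                lambda a: a.get("exposes_private_data", False)),
]

print(evaluate({"expected_harm": 1}, rules))   # a harmful action is vetoed
print(evaluate({}, rules))                     # an innocuous action is permitted
```

Even this toy raises the harder questions the course addresses: where do the rules come from, how are conflicts and trade-offs between them resolved, and how could such knowledge be acquired rather than hand-coded?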
The class will meet Mondays and Wednesdays, 4:30 to 6:00 pm, online (Zoom).
The first half will be a lecture. We anticipate having a number of guest lecturers. Put questions and comments into the Chat window, so the lecturer can respond at the end.
In the second half, we will divide into small groups to consider questions related to the lecture topic. After a while, the groups will report back, and we will identify points of agreement (group consensus) and points of disagreement (differences in values, trade-offs among values, and conflicts of perspective).
Each student will attend the classes, participate in the discussions, and write two papers. Attendance and participation will have significant weight in the course grade.
In the first paper, due at mid-term, you will formulate a question and review the available literature related to that question. The goal of your paper is to identify, clarify, and summarize the major positions on that question.
In the second paper, due at the end of the term, you will pick a question, take a position on how it should be answered, and justify your position, responding clearly to anticipated arguments from critics of that position. Focusing both papers on the same question is not required, but it will obviously make both papers stronger.
Here are two examples (one, two) of published papers by university professors. Each one illustrates both the literature review and the persuasive essay aspects of the papers you will write. Here is a third example (three) of a long literature review that contributes a detailed conceptual structure for its topic. You need not match any of these in length or style, but they are aspirational targets. For your paper, imagine that you are providing detailed help on a particular focused topic, to a friend who wants to get started doing research on that specific topic.
Nonzero describes human biological and cultural evolution in a framework provided by game theory. Get the book, and before the first class, read the Introduction, Chapters 1-3, and Appendix 1 (49 pages altogether).
There will be extensive assigned readings, and you will do a literature review on a topic of your choice, which will involve more reading. Be sure that you know how to use Google Scholar and the UM Library's Online Journal collection for tracking down references.
We will try to meet the needs of several distinct audiences with overlapping courses.
This course describes and discusses the ethical issues raised by AI and Robotics, reading and analyzing arguments from a number of disciplines, identifying and posing ethical questions, evaluating potential solutions, and formulating future research questions. The course will include guest lectures from experts in computer science, philosophy, cognitive science, psychology, public policy, law, etc.
For the undergraduate course (EECS 498), the two papers should demonstrate that you can search for, find, and review good-quality references beyond those handed out in class, and that you can bring your own creative and critical insights to formulating a good problem and exploring solutions to it.
The expectation for the graduate course (EECS 598, ROB 599) will be (a) a deeper and more analytical literature review that identifies more related work beyond what has been handed out in class, and (b) a deeper and more thoughtful final term paper, anticipating and responding more effectively to critics.
EECS 598 has been approved to satisfy the following CSE Graduate Program requirements: depth (not cognate) requirement for the CSE PhD, and the 500-level and technical elective requirements for the CSE MS.
EECS 498 has been approved to satisfy the College of Engineering Intellectual Breadth requirement for the CS-Eng, DS-Eng, CE, and EE majors (verified on 11/23/2020). (Since this is a special topics course, it doesn't yet show up on the degree audit, but it is manually added after the Drop/Add deadline.)
This term we will be using Piazza for class discussion. The system is designed to get you help quickly and efficiently from classmates, the TA, and myself. Rather than emailing questions to the teaching staff, I encourage you to post your questions on Piazza. If you have any problems or feedback for the developers, email email@example.com.
Find our class signup link at:
We will be drawing on insights from readings and guest lecturers with a variety of different perspectives. Anyone working in this area should remember the important lesson of this children's poem:
Burn-In is a novel, written to demonstrate possible social impacts of AI and robotics technology over the next decade or two. Its many end-notes give citations supporting the reality of the technology it describes. It vividly illustrates the potential for serious problems, and the technological extrapolations are well researched, but remember that this is only one of many possible futures.
The Ministry for the Future is another novel of a possible future, showing how humanity might respond to the threat of climate change over the next several decades. AI and robotics have only a small role, but the need and the difficulty of establishing trust and cooperation are central.
Evil Geniuses discusses the political economics of the last half-century, leading up to the current level of economic inequality. The author has a strong and clearly stated political position. Even if you disagree, you should understand and respond to his arguments.
My chapter in the Oxford Handbook of Ethics of AI provides an overview of some of the topics we will cover in the course.