|290-8||Foundations for Beneficial AI||Buchak/Holliday||M 2-4||103 Moffitt|
Instructors: Stuart Russell (CS); Lara Buchak and Wesley Holliday (Philosophy); Shachar Kariv (Economics).
This interdisciplinary course examines the application of ideas from philosophy and economics to decision making by AI systems on behalf of humans, and in particular to the problem of ensuring that increasingly intelligent AI systems remain beneficial to humans. Solving this problem requires designing AI systems whose objective is to satisfy human preferences while remaining necessarily uncertain as to what those preferences are. The course will study issues arising when applying these principles to make decisions on behalf of multiple humans and real (rather than idealized) humans. Topics include utility theory, bounded rationality, utilitarianism, altruism, interpersonal comparisons of utility, preference learning, plasticity of human preferences, epistemic uncertainty about preferences, decision making under risk, social choice theory, and inequality. Students will read papers from the literature in AI, philosophy, and economics and will work in interdisciplinary teams to develop substantial analyses in one or more of these areas. No advanced mathematical background is assumed, but students should be comfortable with formal arguments involving axioms and proofs.
All students will be waitlisted initially. To ensure a balance of disciplines in the course, final enrollment decisions will be made by the instructors by the end of the first week of class. Preference will be given to PhD students in CS, Philosophy, and Economics, but other well-prepared students with a particular interest in the course will be considered.