In the past few years, artificial intelligence and machine learning have seemingly moved out of the realm of science fiction and into our everyday lives. From autonomous vehicles to intelligent assistants like Siri and Alexa to the algorithmic content-moderation systems used by Twitter and Facebook, the growing integration of artificial agents into human social, economic, and political life has raised pressing ethical dilemmas for users, designers, and regulators alike. How can we as human beings make informed and ethical decisions about the highly complex, often impenetrable, AI-based decision systems that many of us increasingly rely upon? In some cases, AI ethics touches on fundamental questions of personhood, and on life-and-death decisions made in the context of war or medicine. But in other, less dramatic but no less significant arenas, questions about security, privacy, surveillance, social manipulation, financial stability, and equity are no longer theoretical. Artificial intelligence has become a reality that citizens, technologists, and policy-makers must reckon with in the present, not some distant future.
Artificial Intelligence at IU
Indiana University offers a unique combination of depth and nearly unparalleled breadth in AI, including major research in AI and genomics, machine learning and computer vision (in partnership with Naval Surface Warfare Center Crane), connectome models of the human brain, early detection of Alzheimer's disease in human brain scans, and medical image analysis. The university offers a course in AI Ethics. In addition, its Emerging Areas of Research program recently funded a $2.5-million initiative to study connections between machine learning and human learning. The university has made major investments in AI technologies and is in the process of acquiring the state's first AI supercomputer. Learn more about IU's research impact in the field of AI.
Keywords: artificial intelligence, machine learning, natural language processing, facial recognition, autonomous and sensor-equipped vehicles, robotics
Convener(s): Nathan Ensmenger
Project Activities To Date
AI and Work - Your AI is a Human, a Polemic...with Sarah T. Roberts, May 14, 2021
This webinar considers the challenge of defining AI and how the role of humans in these systems is obscured by the rhetoric and narratives surrounding what AI is, and is not.
Resource List for AI and Work - Your AI is a Human
Teaching AI Ethics Roundtable with Casey Fiesler, Kelly Joyce, Jonnie Penn, Ben Peters, and Jennifer Terrell, May 11, 2021
A lively conversation about teaching ethics, including the challenge of definitions and terms and the role of disciplinary differences. Panelists share teaching resources, tips, and insights.
The Long History of Transphobic Algorithmic Bias and Its Connections to Automated "Gender Recognition" Technologies with Mar Hicks, April 22, 2021
An examination of the British government's increasingly computerized methods for tracking, identifying, and defining citizens, including transgender Britons, in the 1950s. See related paper:
Hicks, Mar. 2019. Hacking the Cis-tem. IEEE Annals of the History of Computing 41(1). DOI: 10.1109/MAHC.2019.2897667
Resource List for The Long History of Transphobic Algorithmic Bias
Promoting Public Trust in AI with Bran Knowles, March 18, 2021
Trust is a human value, but what is it, and what role does it play in the design, development, and use of AI?
Key Takeaways from webinar:
- Trustworthiness as moral imperative vs promoter of adoption
- Trust in a particular AI vs trust in pervasive AI as an institution
- Public trust cannot be based on the ability to identify and evaluate particular AIs
- Focusing on an AI's fairness, explainability, robustness, etc. has little impact on public trust (or its lack)
The Right(s) Question: Can and Should Robots Have Rights? with David Gunkel, February 11, 2021
In a recent proposal issued by the European Parliament it was suggested that robots and AI might need to be considered “electronic persons” for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with considerable resistance. Underlying the controversy, however, is an important ethical question: When (if ever) would it be necessary for robots, AI, or other socially interactive, autonomous systems to have some claim to moral and legal standing? When (if ever) would a technological artifact be considered more than a mere instrument of human action and have some legitimate claim to independent social status? In this presentation, David Gunkel, author of the books The Machine Question (MIT Press 2012), Robot Rights (MIT Press 2018), and How to Survive a Robot Invasion (Routledge 2020) offers a provocative argument for giving serious consideration to what has been previously regarded as unthinkable: whether robots and other technological artifacts of our own making can and should have a claim to moral and legal consideration.
Resource List for The Right(s) Question: Can and Should Robots Have Rights?
The Problem with the Trolley Problem: Ethics and Autonomous Vehicles with Jack Stilgoe and Jameson Wetmore, June 18, 2020
This webinar moves the discussion surrounding autonomous vehicles beyond that of the trolley problem by asking key questions including:
- Who benefits from new technologies?
- Who pays for new technologies?
- What is required for a self-driving car to do what it promises to do in the world?
- Why does it matter when the story of autonomous vehicles begins?
- How might autonomous vehicles change what it means to be a parent?
- How could citizens increase their role in deciding what the acceptable risks are for autonomous vehicles?
- How have different values been leveraged historically to argue for automation?
- How are values implicit in autonomous vehicles?
Resources by Topic
- Reading
Larsson, Stefan, and Fredrik Heintz. 2020. Transparency in artificial intelligence. Internet Policy Review 9, no. 2.
- Case Study
Hiring by Machine, Case Study #5 Princeton
The development of artificial intelligence (AI) systems and their deployment in society gives rise to ethical dilemmas and hard questions. This is one of a set of fictional case studies that are designed to elucidate and prompt discussion about issues in the intersection of AI and Ethics. As educational materials, the case studies were developed out of an interdisciplinary workshop series at Princeton University that began in 2017-18. They are the product of a research collaboration between the University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) at Princeton.
View a PDF with information from the Hiring By Machine case study
- Course Syllabi
Algorithms in Society (Berkeley)
- Reading
Friedman, Batya, and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems 14(3), 330-347.
- Case Study
Automated Healthcare App, Case Study #1, Princeton
View a PDF with information from the Automated Healthcare App case study
- Book
Robot Rights by David Gunkel, 2018, MIT Press
- Video
Robot Rights with David Gunkel, Machine Ethics Podcast, October 2020
- Video
Do Robots Deserve Rights? What if Machines Become Conscious?
- Case Study
Dynamic Sound Identification, Case Study #2, Princeton
View a PDF with information from the Dynamic Sound Identification case study
- Reading
Schebesch, Klaus Bruno. 2019. The Interdependence of AI and Sustainability: Can AI Show a Path Toward Sustainability? In Griffiths School of Management and IT Annual Conference on Business, Entrepreneurship and Ethics, pp. 383-400. Springer.
- Reading
Pettersen, Ida N. 2008. The Ethics in Balancing Control and Freedom when Engineering Solutions for Sustainable Behaviour. International Journal of Sustainable Engineering 1(4), 287-297.
Resources by Type
- Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020). Association for Computing Machinery, New York, NY, USA.
- Boddington, P. (2017) Towards a Code of Ethics for Artificial Intelligence (Artificial Intelligence: Foundations, Theory, and Algorithms) Springer.
- Bouville, M. (2008) On Using Ethical Theories to Teach Engineering Ethics Science and Engineering Ethics vol 14, 111-120.
- Burton, Emanuelle et al. (2015) Teaching AI Ethics Using Science Fiction. AAAI Workshops, North America, April 2015, 33-37
- Burton, Emanuelle et al. (2017). Ethical Considerations in Artificial Intelligence Courses. AI Magazine, vol 38, 2, 22-34.
- Christian, B. (2020) The Alignment Problem: Machine Learning and Human Values. W.W. Norton & Company.
- Coeckelbergh, Mark. (2020) AI Ethics (The MIT Press Essential Knowledge series), MIT Press.
- Dignum, V. (2019) Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way (Artificial Intelligence: Foundations, Theory, and Algorithms). Springer.
- Dubber, M., Pasquale, F. et al. (2020) The Oxford Handbook of Ethics of AI (Oxford Handbook Series). Oxford University Press
- Franks, Bill. (2020) 97 Things About Ethics Everyone in Data Science Should Know: collective wisdom from the experts. O'Reilly Media.
- Friedman, Batya, and Helen Nissenbaum. 1996 Bias in computer systems. ACM Transactions on Information Systems 14(3), 330-347.
- Furey, Heidi and Martin, Fred. (2018). AI Education Matters: a modular approach to AI Ethics Education. AI Matters, vol 4 (4), 13-15
- Gellers, Joshua C. (2020). Rights for Robots: artificial intelligence, animal and environmental law. Routledge. Open Access book available.
- IEEE (2019). Ethically Aligned Design: a vision for prioritizing human well-being with autonomous and intelligent systems. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. First Edition.
- Larsson, Stefan, and Fredrik Heintz. 2020. Transparency in artificial intelligence. Internet Policy Review 9, no. 2
- Liao, S.M. (2020) Ethics of Artificial Intelligence. Oxford University Press.
- Müller, Vincent C. (2020) Ethics of Artificial Intelligence and Robotics, in Edward N. Zalta (ed.) The Stanford Encyclopedia of Philosophy
- Schebesch, Klaus Bruno. 2019. The Interdependence of AI and Sustainability: Can AI Show a Path Toward Sustainability? In Griffiths School of Management and IT Annual Conference on Business, Entrepreneurship and Ethics, pp. 383-400. Springer.
- Singer, Peter. (2009). Wired for war : the robotics revolution and conflict in the 21st century. Penguin Press.
- AI Principles Map
Understanding the trends, common threads, and differences among published AI Ethics principle statements.
- ACM Code of Ethics and Professional Conduct
- Engineering and Physical Sciences Research Council - Principles of Robotics
- IEEE.org
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
The IEEE Global Initiative's mission is, "To ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity."
- National AI Policies and Strategies
A live repository of over 300 AI policy initiatives from 60 countries, territories and the EU.
- Ethics in Context
From the University of Toronto. Videos, podcasts, and more on multiple areas of ethics and technology, including AI.
- Embedded Ethics, Repository of Open Source Course Modules
These modules are made available under a Creative Commons Attribution 4.0 International license (CC BY 4.0).
- Explore AI Ethics
A curated directory of educational resources for teaching and learning about the ethics of artificial intelligence. Organized and searchable by both topic and category, each page contains either a short excerpt from the resource, a press release, a video, or, where permission has been granted, the full text.
- Syllabi Collection from the Center for the Study of Ethics in the Professions
Illinois Institute of Technology