3.1 - Embodied Intelligence, Agency, and Responsibility
Artificial Intelligence Policy
🧠 Think:
- How important is it for intelligence to be “embodied” in a physical agent that interacts with the “real” world, rather than encountering the world only through digital representations: text, images, video, and the internet? Does embodiment (or the lack of it) change how we think about learning?
- What about responsibility? Are AI agents responsible for their behavior? Should they be, legally? Should the companies that make them? Does embodiment change how we think about this responsibility?
🎧 Listen:
- Complexity podcast, Season 2, Episode 4: “Babies vs. Machines”
📖 Read:
- Mitchell et al., “Fully Autonomous AI Agents Should Not Be Developed”
🌐 Browse:
- Bostrom, “Ethical Issues in Advanced Artificial Intelligence” (the source of the famous “paperclip” argument)
- Liu and Wu, “A Brief History of Embodied Artificial Intelligence, and its Outlook”
- “NHTSA Finds Teslas Deactivated Autopilot Seconds Before Crashes”
- Ziegler, “When Will Cars Be Fully Self-Driving?” (2023)
- “The evolving safety and policy challenges of self-driving cars”, Brookings
📚 Additional Resources:
- Ganesh, “The ironies of autonomy”
- Chan et al., “Infrastructure for AI Agents”
- Geistfeld, “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation”
- NHTSA, “Automated Vehicles for Safety”
- Google, AI Responsibility Report
- “Doing vs. Allowing Harm” (especially the Trolley Problem section), Stanford Encyclopedia of Philosophy
- Himmelreich, “Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations”
📝 Submit:
- Discussion question to course chat
Tip:
- 📖 Read, 🎧 Listen, and/or 📺 Watch items are required content for the day and should be completed before class.
- 🌐 Browse items should be skimmed; they do not require deep reading unless you want to dig in.
- 📚 Additional Resources are optional references for debates, final projects, and future use.