The Hidden Complexity of Wishes

This video is about AI Alignment. At the moment, humanity has no idea how to make AIs follow complex goals that track human values. This video introduces a series focused on what is sometimes called "the outer alignment problem". In future videos, we'll explore how this problem affects machine learning systems today and how it could lead to catastrophic outcomes for humanity.

The text of this video has been slightly adapted from an original article written by Eliezer Yudkowsky. You can read the original article here:

If you'd like to skill up on AI Safety, we highly recommend the AI Safety Fundamentals courses by BlueDot Impact at

There are three courses: AI Alignment, AI Governance, and AI Alignment 201. You can follow AI Alignment and AI Governance even without a technical background in AI. AI Alignment 201, however, presupposes having completed the AI Alignment course first, plus knowledge equivalent to university-level courses on deep learning and reinforcement learning.

The courses consist of a selection of readings curated by experts in AI safety. They are available to all, so you can simply read them if you can't formally enroll in the courses. If you want to participate in the courses instead of just going through the readings by yourself, BlueDot Impact runs live courses which you can apply to. The courses are remote and free of charge. They involve a few hours of effort per week to go through the readings, plus a weekly call with a facilitator and a group of people learning from the same material. At the end of each course, you can complete a personal project, which may help you kickstart your career in AI Safety.

BlueDot Impact receives more applications than they can take, so if you'd still like to follow the courses alongside other people, you can go to the #study-buddy channel in the AI Alignment Slack.
You can join by clicking on the first entry on

You could also join Rational Animations' Discord server at , and see if anyone is up to be your partner in learning.

▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, KO-FI▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🟠 Patreon:
🔵 Channel membership:
🟤 Ko-fi, for one-time and recurring donations:

▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Shrimant RMR Kristin Lindquist Nathan Metzger Monadologist Glenn Tarigan NMS James Babcock Colin Ricardo Long Hoang Tor Barstad Gayman Crothers Stuart Alldritt Ville Ikäläinen Chris Painter Juan Benet Falcon Scientist Jeff Christian Loomis Tomarty Edward Yu Ahmed Elsayyad Chad M Jones Emmanuel Fredenrich Honyopenyoko Neal Strobl bparro Danealor Craig Falls Aaron Camacho Vincent Weisser Alex Hall Ivan Bachcin joe39504589 Klemen Slavic Scott Alexander noggieB Dawson John Slape Gabriel Ledung Jeroen De Dauw Craig Ludington Jacob Van Buren Superslowmojoe Nicholas Kees Dupuis Michael Zimmermann Nathan Fish Ryouta Takehiko Bleys Goodson Ducky Bryan Egan Matt Parlmer Tim Duffy rictic Mark Gongloff marverati Luke Freeman Dan Wahl Rey Carroll Alcher Black Harold Godsoe William Clelland ronvil AWyattLife codeadict Lazy Scholar Torstein Haldorsen Supreme Reader Michał Zieliński

▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
Animation director: Hannah Levingstone
Writer: Eliezer Yudkowsky
Editor and producer: :3
Line Producer and production manager: Kristy Steffens
Quality Assurance Lead: Lara Robinowitz
Animation: Michela Biancini, Owen Peurois, Zack Gilbert, Jordan Gilbert, Keith Kavanagh, Damon Edgson, Neda Lay, Colors Giraldo, Renan Kogut
Background Art: Hané Harnett, Zoe Martin-Parkinson, Olivia Wang
Compositing: Renan Kogut, Patrick O'Callaghan, Ira Klages
Voices: Robert Miles - Narrator
VO Editing: Tony Di Piazza
Sound Design and Music: Epic Mountain