Algorithmic Bias and Fairness: Crash Course AI #18

Today, we're going to talk about five common types of algorithmic bias we should pay attention to: data that reflects existing biases, unbalanced classes in training data, data that doesn't capture the right value, data that is amplified by feedback loops, and malicious data. Bias itself isn't necessarily a terrible thing; our brains often use it to take shortcuts by finding patterns. But bias becomes a problem if we don't acknowledge exceptions to patterns, or if we allow it to discriminate.
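To make the "unbalanced classes" problem concrete, here is a minimal sketch (assuming scikit-learn and a synthetic, made-up dataset) of how a model trained on data where one class is rare can report high overall accuracy while still doing poorly on the class we actually care about:

```python
# Minimal sketch of the "unbalanced classes" problem, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score

# Synthetic data: about 95% of examples are class 0, only ~5% are class 1.
X, y = make_classification(
    n_samples=5000,
    n_features=10,
    weights=[0.95, 0.05],  # the class imbalance
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Accuracy can look high simply because class 0 dominates the data;
# recall on the rare class shows how often class 1 is actually found.
print("overall accuracy:", accuracy_score(y_test, pred))
print("recall on the rare class:", recall_score(y_test, pred))
```

In practice, checking per-class metrics like recall, and rebalancing the training data or weighting the rare class, are common ways to catch and reduce this kind of bias.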