Dr. Olga Russakovsky on Fairness in Visual Recognition
Title: Fairness in Visual Recognition: Redesigning the Datasets, Improving the Models and Diversifying the AI Leadership
Content:
00:00 - Introduction
10:20 - REVISE: REvealing VIsual biaSEs tool
18:57 - Building fair AI models: Fair attribute classification through latent space de-biasing
42:04 - AI decision makers
46:53 - Q&A
Abstract: Computer vision models trained on unparalleled amounts of data have revolutionized many applications. However, more and more historical societal biases are making their way into these seemingly innocuous systems. We focus our attention on two types of biases: (1) bias in the form of inappropriate correlations between protected attributes (age, gender expression, skin color, ...) and the predictions of visual recognition models, as well as (2) bias in the form of unintended discrepancies in error rates of vision systems across different social, demographic or cultural groups. In this talk, I’ll dive deeper both into the technical causes and
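The abstract distinguishes two measurable failure modes: predictions that correlate with a protected attribute, and error rates that differ across groups. As a rough illustration (not code from the talk), the sketch below computes both on synthetic data; all variable names and the simulated predictions are assumptions for illustration only.

```python
# Minimal sketch (illustrative, not from the talk): quantifying the two
# bias types the abstract describes, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0/1), ground-truth labels, and a
# simulated model whose scores leak the protected attribute.
group = rng.integers(0, 2, size=n)
label = rng.integers(0, 2, size=n)
score = 0.5 * label + 0.2 * group + rng.normal(0, 0.3, size=n)
pred = (score > 0.5).astype(int)

# Bias type (1): inappropriate correlation between the protected
# attribute and the model's predictions.
corr = np.corrcoef(group, pred)[0, 1]
print(f"prediction/attribute correlation: {corr:.3f}")

# Bias type (2): discrepancy in error rates across groups.
for g in (0, 1):
    mask = group == g
    err = np.mean(pred[mask] != label[mask])
    print(f"group {g}: error rate = {err:.3f}")
```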