Week 13: Bias and Fairness
Learning Objectives
By the end of this week, students will be able to:
- Define multiple formal notions of algorithmic fairness and explain their tensions
- Measure group-level disparities in model performance using real datasets
- Identify sources of bias (data, labels, feedback loops, proxy features) in ML pipelines
- Evaluate proposed fairness interventions and articulate their tradeoffs
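As a preview of the lab, measuring group-level disparities usually comes down to computing a metric per group and comparing. The sketch below is illustrative only, using synthetic arrays (not the course dataset); the metric names follow common usage (selection rate for demographic parity, TPR for equal opportunity).

```python
import numpy as np

# Synthetic stand-in data: a binary group attribute, true labels, and
# model predictions. The real lab uses an actual dataset instead.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # 0 = group A, 1 = group B
y_true = rng.integers(0, 2, size=1000)  # ground-truth labels
y_pred = rng.integers(0, 2, size=1000)  # model predictions

def selection_rate(pred):
    """Fraction of positive predictions: the demographic-parity metric."""
    return pred.mean()

def true_positive_rate(true, pred):
    """P(pred = 1 | true = 1): the equal-opportunity metric."""
    positives = true == 1
    return pred[positives].mean()

# Compare each metric across groups; large gaps indicate disparity.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: selection rate = {selection_rate(y_pred[mask]):.3f}, "
          f"TPR = {true_positive_rate(y_true[mask], y_pred[mask]):.3f}")
```

The same per-group pattern generalizes to any metric (FPR, calibration error, etc.): slice by group, compute, compare.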
Perspectival Reading
Reading: TBD (e.g., Barocas et al., “Fairness and Machine Learning”)
Reflection Questions
- Several mathematical definitions of fairness are provably incompatible. What does that imply for the claim that a model can be made “fair”?
- Who is harmed when an ML system is unfair, and who has the power to change it?
- Is fairness-aware ML a technical fix to a social problem? What is gained and lost by framing it that way?
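The incompatibility mentioned in the first question can be made concrete with arithmetic. One known result (Chouldechova, 2017) is that when two groups have different base rates, a classifier with equal precision (PPV) and equal TPR across groups must have unequal FPR, via the identity FPR = p/(1−p) · (1−PPV)/PPV · TPR. The numbers below are made up purely for illustration.

```python
# Demonstrating the base-rate tension: hold PPV and TPR equal across
# two groups with different base rates p, and the implied FPRs diverge.
def fpr(base_rate, ppv, tpr):
    """FPR implied by base rate, PPV, and TPR (Chouldechova's identity)."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

ppv, tpr = 0.7, 0.8            # held equal for both groups (illustrative values)
fpr_a = fpr(0.5, ppv, tpr)     # group A: base rate 0.5
fpr_b = fpr(0.2, ppv, tpr)     # group B: base rate 0.2
print(fpr_a, fpr_b)            # unequal FPRs despite equal PPV and TPR
```

No threshold tuning can escape this: as long as base rates differ and the classifier is non-trivial, equalizing all three metrics at once is impossible.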
Slides
Notebook Demo
Open in Google Colab (link TBD)
Lab Assignment
Week 13 Lab — GitHub Classroom (link TBD)