Machine learning is increasingly used to make high-stakes decisions about people.
Some of these ML algorithms have made decisions that are unfairly biased against certain subpopulations, for example by race or gender.
Since the training data itself may be biased, ML models must be trained to account for this, to avoid reproducing discriminatory practices.
This paper develops a framework for modelling fairness using causal models (which we'll get to later) and introduces the notion of counterfactual fairness.
What is counterfactual fairness?
Counterfactual fairness - a decision is fair to an individual if it is the same in both the actual world and a counterfactual world in which the individual belonged to a different demographic group.
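Formally, in the paper's notation (with $U$ the latent background variables, $A$ the protected attribute, and $X$ the observed features), a predictor $\hat{Y}$ is counterfactually fair if, for every value $y$ and any $a'$:

$$
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
$$

In words: conditioned on what we observed about the individual, intervening to change $A$ would not change the distribution of the prediction.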
The paper demonstrates this using a law school example.
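The intuition can be sketched in a toy simulation. Below is a minimal illustration (my own, not the paper's actual model or numbers): a latent "knowledge" variable is unaffected by the protected attribute, but the observed grade is. A predictor built on the observed grade changes under the counterfactual, while one built only on the latent non-descendant of the protected attribute does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy structural causal model (illustrative only):
# latent ability U is independent of the protected attribute A,
# but the observed GPA is causally shifted by A (bias in the data).
U = rng.normal(0, 1, n)             # latent ability, not a descendant of A
A = rng.integers(0, 2, n)           # protected attribute (group 0 or 1)
gpa = U + 0.8 * A + rng.normal(0, 0.1, n)

# Counterfactual GPA: same individual (same U), with A flipped.
gpa_cf = U + 0.8 * (1 - A) + rng.normal(0, 0.1, n)

# Predictor 1 uses GPA, a descendant of A -> not counterfactually fair:
# its output changes when we intervene on A.
pred_obs, pred_obs_cf = gpa, gpa_cf

# Predictor 2 uses only U, a non-descendant of A -> counterfactually fair:
# flipping A leaves U, and hence the prediction, unchanged.
pred_fair, pred_fair_cf = U, U

print("mean |change| using GPA:", np.abs(pred_obs - pred_obs_cf).mean())
print("mean |change| using U:  ", np.abs(pred_fair - pred_fair_cf).mean())
```

The GPA-based predictor shifts by roughly the size of the causal effect of A, whereas the U-based predictor is unchanged, which is the core of the paper's law school argument.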
Motivation