Abstract

Machine learning now informs many consequential real-world decisions.

Some of these ML algorithms have made decisions that are unfairly biased against certain subpopulations, for example against a particular race or gender.

Since the training data may itself be biased, ML models must be trained to account for this bias, or they risk perpetuating discriminatory practices.

This paper develops a framework for modelling fairness using causal models (introduced later) and a new criterion called counterfactual fairness.

What is counterfactual fairness?

Counterfactual fairness - a decision is fair to an individual if it is the same in (a) the actual world and (b) a counterfactual world in which the individual belonged to a different demographic group.
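
Using the paper's notation (A the protected attribute, X the observed features, U the latent background variables, and Ŷ the predictor), the definition can be sketched formally as: for every outcome y, context (X = x, A = a), and counterfactual value a',

```latex
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big)
= P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
```

Here \(\hat{Y}_{A \leftarrow a'}\) denotes the prediction in the counterfactual world where A is set to a', so the criterion demands that intervening on the protected attribute leaves the distribution of the prediction unchanged for that individual.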

The paper demonstrates this using a law school example.
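As a hedged illustration (a toy sketch, not the paper's exact algorithm), one simple variant in this setting assumes an additive linear model, estimates the latent background variable as the residual of each observed feature after regressing out the protected attribute, and then trains the predictor on those residuals only. The law-school-style data below (group, GPA, LSAT, first-year average) is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic law-school-style data: a latent "knowledge" factor U drives
# GPA, LSAT and first-year average (FYA); the protected attribute A also
# shifts the observed scores, which is the bias we want to remove.
A = rng.integers(0, 2, n).astype(float)   # protected attribute (group)
U = rng.normal(size=n)                    # latent background variable
gpa = 0.8 * U + 0.5 * A + rng.normal(scale=0.3, size=n)
lsat = 0.7 * U + 0.6 * A + rng.normal(scale=0.3, size=n)
fya = 1.0 * U + rng.normal(scale=0.3, size=n)

def residualise(x, a):
    """Remove the linear effect of a from x; the residual proxies U."""
    design = np.column_stack([np.ones_like(a), a])
    beta, *_ = np.linalg.lstsq(design, x, rcond=None)
    return x - design @ beta

# Train the predictor only on A-free residuals, never on A or raw scores.
features = np.column_stack([residualise(gpa, A), residualise(lsat, A)])
Xf = np.column_stack([np.ones(n), features])
w, *_ = np.linalg.lstsq(Xf, fya, rcond=None)
pred = Xf @ w

# Predictions should now be (near-)uncorrelated with the protected attribute.
print(abs(np.corrcoef(pred, A)[0, 1]))
```

In this linear toy model the residuals are uncorrelated with A by construction, so any linear predictor built from them is invariant to the protected attribute; the paper's full algorithm instead infers U with a probabilistic causal model rather than plain regression residuals.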

Paper Layout

  1. What is the problem and motivation
  2. Basic concepts of causal models and fairness
  3. Formal definition of counterfactual fairness
  4. Algorithmic implementation and how it differs from existing methods
  5. Algorithm illustrated with an example
  6. Conclusion

(1) Contribution

Motivation