Part 3: Practical Tasks With AllenNLP

Fairness and Bias Mitigation

Author: Arjun Subramonian

A practical guide to the AllenNLP Fairness module.

As models and datasets become increasingly large and complex, it is critical to evaluate models against multiple definitions of fairness and to mitigate biases in learned representations. allennlp.fairness aims to make fairness metrics, fairness training tools, and bias mitigation algorithms easy to use and accessible to researchers and practitioners of all levels.
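To make the idea of a fairness metric concrete before diving into the module, here is a minimal, library-free sketch of one common definition: demographic parity (the "independence" notion of fairness), which asks that a classifier's positive-prediction rate not depend on group membership. The function name and data are illustrative, not part of the allennlp.fairness API.

```python
from collections import Counter

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    A gap of 0 means the positive rate is independent of group
    membership (the "independence" definition of fairness).
    predictions: iterable of 0/1 predicted labels
    groups: iterable of group identifiers (exactly two distinct values)
    """
    totals = Counter(groups)
    positives = Counter(g for p, g in zip(predictions, groups) if p == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    a, b = sorted(rates)
    return abs(rates[a] - rates[b])

# Hypothetical predictions: group A gets a positive at rate 3/4,
# group B at rate 1/4, so the parity gap is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # → 0.5
```

Other definitions covered later in this chapter (e.g., separation and sufficiency) additionally condition on the gold labels, and a single model generally cannot satisfy all of them at once.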

We hope allennlp.fairness empowers everyone in NLP to combat algorithmic bias, that is, the "unjust, unfair, or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where they manifest in algorithmic systems or algorithmically aided decision-making" (Chang et al. 2019). Ultimately, "people who are the most marginalized, people who’d benefit the most from such technology, are also the ones who are more likely to be systematically excluded from this technology" because of algorithmic bias (Chang et al. 2019).

1. Why do we need fairness and bias mitigation tools?
2. An Overview of Fairness Metrics
3. An Overview of Bias Mitigation and Bias Direction Methods
4. An Overview of Bias Metrics
5. Applying Bias Mitigation to Large, Contextual Language Models
6. Next Steps