Achieving Non-Discrimination in Prediction

Document Type

Conference Proceeding

Publication Date

2018

Keywords

Machine Learning, Classification, Multidisciplinary Topics and Applications, Philosophical and Ethical Issues, Humans and AI, Ethical Issues in AI

Abstract

In discrimination-aware classification, pre-process methods for constructing a discrimination-free classifier first remove discrimination from the training data and then learn the classifier from the cleaned data. However, they provide no theoretical guarantee against potential discrimination when the classifier is deployed for prediction. In this paper, we fill this gap by mathematically bounding the discrimination in prediction. We adopt the causal model to model the data-generation mechanism, and formally define discrimination in the population, in a dataset, and in prediction. We obtain two important theoretical results: (1) discrimination in prediction can still exist even if the discrimination in the training data is completely removed; and (2) not all pre-process methods can ensure non-discrimination in prediction even though they achieve non-discrimination in the modified training data. Based on these results, we develop a two-phase framework for constructing a discrimination-free classifier with a theoretical guarantee. The experiments demonstrate the theoretical results and show the effectiveness of our two-phase framework.
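The following is a minimal, illustrative sketch of theoretical result (1): a pre-process step can drive discrimination in the training data to zero while the learned classifier still discriminates in prediction. It uses the association-based risk difference P(y=1 | s=1) - P(y=1 | s=0) as the discrimination measure and a crude random label-massaging step; both are simplifying assumptions for illustration and are not the causal-model-based measure or the two-phase framework proposed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def risk_difference(y, s):
    """Risk difference P(y=1 | s=1) - P(y=1 | s=0) between the
    protected group (s=1) and the non-protected group (s=0)."""
    return y[s == 1].mean() - y[s == 0].mean()

rng = np.random.default_rng(0)
n = 20000
s = rng.integers(0, 2, n)                             # protected attribute
x = 2.0 * s + rng.normal(0.0, 1.0, n)                 # feature influenced by S
y = (x + rng.normal(0.0, 1.0, n) > 1.0).astype(int)   # biased labels

# Naive "pre-processing": flip labels until the risk difference in the
# training data is (approximately) zero, leaving the features untouched.
n1, n0 = (s == 1).sum(), (s == 0).sum()
gap = risk_difference(y, s)
k = int(round(gap * n1 * n0 / (n1 + n0)))             # flips needed per group
pos_s1 = np.where((s == 1) & (y == 1))[0]
neg_s0 = np.where((s == 0) & (y == 0))[0]
y_clean = y.copy()
y_clean[rng.choice(pos_s1, k, replace=False)] = 0     # demote some s=1 positives
y_clean[rng.choice(neg_s0, k, replace=False)] = 1     # promote some s=0 negatives

# Train on the cleaned labels, then measure discrimination in prediction.
clf = LogisticRegression().fit(x.reshape(-1, 1), y_clean)
y_hat = clf.predict(x.reshape(-1, 1))

print("discrimination in training data:", risk_difference(y_clean, s))
print("discrimination in prediction:   ", risk_difference(y_hat, s))
```

Because the feature X still carries the influence of S, the classifier reconstructs the removed bias at prediction time: the risk difference on the cleaned labels is near zero, while the risk difference on the predictions remains large.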

Comments

Principal Investigator: Xintao Wu

Acknowledgements: This work was supported in part by NSF 1646654.
