Date of Graduation

12-2023

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Engineering (PhD)

Degree Level

Graduate

Department

Computer Science & Computer Engineering

Advisor/Mentor

Lu Zhang

Committee Member

Xintao Wu

Second Committee Member

Xiao Liu

Third Committee Member

Susan Gauch

Keywords

Causal Inference, Classification, Intervention, Long-term Fairness, Machine Learning

Abstract

With the development of artificial intelligence, automated decision-making systems are increasingly integrated into applications such as hiring, lending, education, and recommendation. These machine learning algorithms are expected to deliver faster, more accurate, and more impartial decisions than human judgment. In practice, however, these expectations are not always met: biased training data can lead to discriminatory outcomes. Countering discrimination has become a societal consensus, and both the EU and the US have enacted laws and regulations prohibiting discrimination based on factors such as gender, age, race, and religion. Addressing algorithmic discrimination has therefore attracted considerable attention and emerged as a crucial research area. To tackle this challenge, association-based fairness notions were proposed based on the two legal doctrines of disparate treatment and disparate impact. Subsequently, several causality-based fairness notions were introduced to provide a more comprehensive understanding of how sensitive attributes influence decisions. Researchers have also devised a range of pre-processing, in-processing, and post-processing algorithms to satisfy these fairness metrics. However, much of the fair machine learning literature focuses on static, one-shot scenarios, whereas real-world automated decision systems often make sequential decisions within dynamic environments. Consequently, current fairness algorithms cannot be directly applied to dynamic settings to achieve long-term fairness. In this dissertation, we investigate how to achieve long-term fairness in sequential decision making by addressing the issue of distribution shift, defining an appropriate long-term fairness notion, and designing corresponding fairness algorithms.
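For concreteness, a widely used association-based notion rooted in the disparate-impact doctrine is demographic parity, which compares positive-decision rates across groups defined by a sensitive attribute. A minimal illustrative sketch (the function name and toy data are ours, not from the dissertation):

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-decision rates between the two
    groups defined by a binary sensitive attribute (0 or 1)."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()  # acceptance rate, group 0
    rate_1 = y_pred[sensitive == 1].mean()  # acceptance rate, group 1
    return abs(rate_0 - rate_1)

# Toy example: binary decisions for eight individuals in two groups.
preds = [1, 1, 0, 1, 0, 0, 0, 1]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs. 0.25 -> 0.5
```

A gap of zero means both groups receive positive decisions at the same rate; association-based notions like this ignore *why* the rates differ, which is what motivates the causality-based notions discussed above.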
Leveraging Pearl’s structural causal model, we view the deployment of each model as a soft intervention, enabling us to infer the post-intervention distribution and approximate the actual data distribution, thereby mitigating distribution shift. Additionally, we propose measuring indirect causal effects in time-lagged causal graphs as a causality-based notion of long-term fairness. By integrating these techniques, we introduce an algorithm that concurrently learns multiple fair models from a static dataset containing multi-step data. Furthermore, we convert the traditional optimization into performative risk optimization, which allows a single model to be trained to achieve long-term fairness. We then design a three-phase deep generative framework in which a single decision model is trained on high-fidelity generated time-series data, significantly enhancing its performance. Finally, we extend our focus to Markov decision processes, formulating a novel reinforcement learning algorithm that effectively achieves both long-term and short-term fairness simultaneously.
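The core mechanic of performative risk optimization can be illustrated by repeated retraining on data whose distribution responds to the currently deployed model. The sketch below is a toy example under an assumed linear feedback (`deploy_and_sample` and its drift coefficient are hypothetical, not the dissertation's model); the iterates settle at a performatively stable point, a fixed point of the retraining map:

```python
import numpy as np

rng = np.random.default_rng(0)

def deploy_and_sample(theta, n=2000):
    """Draw data from a distribution that shifts in response to the
    deployed model parameter theta (hypothetical linear feedback)."""
    return rng.normal(loc=0.5 * theta + 1.0, scale=1.0, size=n)

def best_response(data):
    """Risk minimizer of squared loss on the observed data: its mean."""
    return float(data.mean())

# Repeated risk minimization: retrain on the data the last model induced.
theta = 0.0
for _ in range(25):
    theta = best_response(deploy_and_sample(theta))

# The iterates approach the performatively stable point solving
# theta* = 0.5 * theta* + 1.0, i.e. theta* = 2.0, up to sampling noise.
```

Minimizing performative risk directly, rather than iterating to stability, anticipates this feedback during training; this is the sense in which a single model can be optimized for the distribution its own deployment creates.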
