Date of Graduation
5-2025
Document Type
Dissertation
Degree Name
Doctor of Philosophy in Engineering (PhD)
Degree Level
Graduate
Department
Electrical Engineering and Computer Science
Advisor/Mentor
Wu, Xintao
Committee Member
Zhang, Lu
Second Committee Member
Pan, Yanjun
Third Committee Member
Petris, Giovanni G.
Keywords
Fairness; Federated Learning; Privacy; Trustworthy Machine Learning
Abstract
Having access to large, high-quality datasets is crucial for training machine learning models that achieve satisfactory performance. Unfortunately, a single entity (e.g., a mobile device or organization) often cannot access such datasets due to monetary or resource constraints. Traditional machine learning requires that all training data reside in a centralized location for the entire duration of model training; however, in many circumstances it is difficult or even impossible (e.g., due to governmental regulations) for multiple parties to combine their data to meet this requirement. Federated learning is a machine learning paradigm that facilitates the joint training of a model by multiple parties under the coordination of a central server, without the parties having to explicitly share their private local data. Due to its potential for solving challenges in domains such as IoT and healthcare, federated learning research has received significant interest, especially in the areas of efficiency and optimization. Additionally, rising demands for stronger privacy protections and fairness have made research into trustworthy federated learning increasingly popular. Despite progress over the past few years, several open problems remain in trustworthy federated learning.
The goal of this dissertation is to address a major pitfall of current work on trustworthy federated learning: a lack of flexibility. Proposed fair and/or private federated learning methods often require all clients to use the same fairness metric or privacy level, which over-constrains the federation participants. In this dissertation, we address overlooked problems in trustworthy federated learning, such as: 1) how can clients independently choose a demographic fairness metric to enforce locally without greatly affecting the global model's final accuracy, 2) can an optimization function for federated learning be devised that achieves multiple client-level fairness definitions, and 3) is it possible to devise a federated learning algorithm that allows clients to update their local privacy level as needed without requiring full model retraining? To answer these questions, this dissertation makes the following contributions:
1) We developed Fair HyperNetworks (FHN), a personalized federated learning algorithm based on hypernetworks that allows each client in the federation to choose which type of demographic fairness to enforce on its local model. FHN ensures that different demographic groups receive fair treatment while maintaining high accuracy for each client (an illustrative sketch of the hypernetwork idea appears after this list).
2) We proposed Uncertainty-based Distributive Justice for Federated Learning (UDJ-FL), a flexible federated learning framework that can achieve multiple distributive-justice-based client-level fairness metrics. By combining techniques inspired by fair resource allocation with aleatoric uncertainty-based client weighting, UDJ-FL can achieve egalitarian, utilitarian, Rawls’ difference principle, or desert-based client-level fairness (a sketch of such uncertainty-based weighting also appears after this list).
3) We constructed Flexible Local Differential Privacy for Federated Learning (FLDP-FL), an influence-function-based architecture for federated learning that allows clients to change their privacy guarantees post-training without requiring the entire federated learning process to be repeated.
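The following is a minimal, hypothetical sketch of the hypernetwork-based personalization idea behind FHN, written in PyTorch. It is not the dissertation's actual implementation: the model shapes, the two example fairness penalties (a demographic parity gap and an equal opportunity gap), and the single-process training loop that stands in for real client/server communication are all illustrative assumptions.

```python
# Hypothetical sketch: a server-side hypernetwork produces a personalized linear
# model for each client, and each client adds its own chosen fairness penalty.
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a learned per-client embedding to the weights of a small linear model."""
    def __init__(self, n_clients, embed_dim=16, in_dim=10):
        super().__init__()
        self.embeddings = nn.Embedding(n_clients, embed_dim)
        # Emit enough values for one linear classifier: in_dim weights + 1 bias.
        self.body = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, in_dim + 1)
        )

    def forward(self, client_id):
        out = self.body(self.embeddings(client_id))
        return out[:-1], out[-1]          # per-client weights and bias

def demographic_parity_gap(scores, group, y):
    # |mean score of group 1 - mean score of group 0|; assumes both groups appear.
    return (scores[group == 1].mean() - scores[group == 0].mean()).abs()

def equal_opportunity_gap(scores, group, y):
    # Same gap, restricted to positive-label examples.
    pos = y == 1
    return demographic_parity_gap(scores[pos], group[pos], y[pos])

# Each client independently picks its own fairness notion -- the "flexibility"
# described in the abstract. The mapping below is purely illustrative.
FAIRNESS_CHOICE = {0: demographic_parity_gap, 1: equal_opportunity_gap}

def local_loss(hnet, client_id, x, y, group, lam=0.5):
    """Accuracy loss plus the client's chosen fairness penalty."""
    w, b = hnet(torch.tensor(client_id))
    scores = torch.sigmoid(x @ w + b)
    loss = nn.functional.binary_cross_entropy(scores, y.float())
    penalty = FAIRNESS_CHOICE.get(client_id)
    if penalty is not None:
        loss = loss + lam * penalty(scores, group, y)
    return loss

if __name__ == "__main__":
    torch.manual_seed(0)
    hnet = HyperNet(n_clients=2)
    opt = torch.optim.Adam(hnet.parameters(), lr=1e-2)
    for rnd in range(5):                          # a few toy "federated" rounds
        for cid in range(2):                      # visit each client in turn
            x = torch.randn(32, 10)               # synthetic local features
            y = torch.randint(0, 2, (32,))        # binary labels
            g = torch.randint(0, 2, (32,))        # binary demographic group
            loss = local_loss(hnet, cid, x, y, g)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"round {rnd}: last client loss {loss.item():.3f}")
```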
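Below is a similarly hedged sketch of the uncertainty-based client weighting that UDJ-FL builds on. The specific rule (aggregation weights proportional to a client's reported aleatoric uncertainty raised to a power beta) is an assumption used only to show how one aggregation formula can slide between utilitarian-leaning and Rawlsian-leaning behavior; the dissertation's actual weighting scheme may differ.

```python
# Hypothetical sketch: FedAvg-style aggregation whose weights grow with each
# client's reported aleatoric uncertainty.
import numpy as np

def aggregate(client_params, client_uncertainty, beta=1.0):
    """beta = 0 recovers plain averaging (utilitarian-leaning); large beta
    concentrates weight on the highest-uncertainty client (Rawlsian-leaning)."""
    u = np.asarray(client_uncertainty, dtype=float)
    w = u ** beta
    w = w / w.sum()                      # normalize weights to sum to one
    stacked = np.stack(client_params)    # shape: (n_clients, n_params)
    return (w[:, None] * stacked).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = [rng.normal(size=4) for _ in range(3)]   # three clients' local models
    uncertainty = [0.2, 0.5, 1.5]                     # client-reported uncertainty
    for beta in (0.0, 1.0, 4.0):
        print(beta, np.round(aggregate(params, uncertainty, beta), 3))
```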
Citation
Carey, A. N. (2025). Achieving Flexible Fairness and Privacy in Federated Learning. Graduate Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/5660