Abstract
Federated learning offers a way to revolutionize AI applications by eliminating the need for data sharing. Yet, research has shown that information can still be extracted during training, making additional privacy-preserving measures such as differential privacy imperative. For real-world federated learning applications, fairness, ranging from a fair distribution of performance to non-discriminatory behavior, must also be considered. Particularly in high-risk applications (e.g., healthcare), avoiding the repetition of past discriminatory errors is paramount. As recent research has demonstrated an inherent tension between privacy and fairness, we conduct a multivocal literature review to examine current methods for integrating privacy and fairness in federated learning. Our analyses show that the relationship between privacy and fairness has been neglected, posing a critical risk for real-world applications. We highlight the need to explore the relationship between privacy, fairness, and performance, and advocate for the development of integrated federated learning frameworks.
Recommended Citation
Balbierer, Beatrice; Heinlein, Lukas; Zipperling, Domenique; and Kühl, Niklas, "A Multivocal Literature Review on Privacy and Fairness in Federated Learning" (2024). Wirtschaftsinformatik 2024 Proceedings. 14.
https://aisel.aisnet.org/wi2024/14