FairMed-FL: Federated Learning for Fair and Unbiased Deep Learning in Medical Imaging
Tags: deep learning, federated learning, group fairness, medical imaging
Abstract:
Deep learning models have achieved great success in medical imaging tasks. However, recent work on fairness in healthcare has shown that these models can be biased, potentially leading to discriminatory treatment of patients based on demographic attributes such as race, gender, and age. Data bias, often resulting from imbalanced and non-representative datasets, can negatively impact model fairness. While aggregating data from multiple sources can help mitigate data bias, privacy concerns make data sharing challenging. In this scenario, Federated Learning (FL) has emerged as a solution for training models collaboratively without sharing data. This paper presents FairMed-FL, a methodology for assessing fairness in FL for medical imaging tasks. Using two public chest X-ray datasets partitioned by sex and age, we compare federated models trained with clients from a single dataset and from multiple datasets against centralized models trained on each client’s data. The results indicate that FL reduces performance discrepancies between demographic groups, improves the performance of the worst-performing groups, and yields better overall metrics than centralized approaches. These findings highlight the potential of FL for promoting fairness in medical imaging.