Federated Learning for Privacy-Preserving Artificial Intelligence in Healthcare Systems
Abstract
This paper explores how Federated Learning (FL) systems can be strengthened through the integration of Differential Privacy (DP). While FL allows multiple clients to collaboratively train a shared model without exposing their raw data, the model updates exchanged during training may still leak sensitive information. To address this risk, DP is applied through per-client gradient clipping and Gaussian noise addition, thereby reducing the likelihood of privacy breaches. The study employs the FedAvg algorithm in simulation experiments with ten clients under three noise levels (σ = 0.0, 0.5, 1.0), evaluating outcomes in terms of accuracy, log loss, and an illustrative Rényi-DP privacy budget (ε). The results highlight the trade-off between privacy and utility: models trained without noise achieve the highest accuracy but the weakest privacy, moderate noise provides balanced performance, and stronger noise enhances privacy at the expense of accuracy. The findings emphasize the importance of tuning parameters such as the clipping norm, noise multiplier, number of communication rounds, and client participation rate to balance formal privacy protection with model utility. The study concludes by recommending standardized privacy accounting, randomized client participation, and task-specific parameter tuning as essential practices for securely deploying FL in sensitive domains such as healthcare, finance, and the Internet of Things.
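The core mechanism described above (clipping each client's update and adding Gaussian noise before averaging) can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the function names, the choice of noising the averaged update, and the noise scale `noise_multiplier * clip_norm / n_clients` are assumptions reflecting one common DP-FedAvg convention.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client's update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """One DP-FedAvg aggregation step (illustrative convention):
    clip each client update, average, then add Gaussian noise with
    standard deviation noise_multiplier * clip_norm / n_clients."""
    rng = rng or np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Ten simulated clients, as in the experiments; noise_multiplier = 0.0
# recovers plain (non-private) FedAvg.
rng = np.random.default_rng(42)
updates = [rng.normal(size=4) for _ in range(10)]
no_noise = dp_fedavg_round(updates, noise_multiplier=0.0,
                           rng=np.random.default_rng(1))
with_noise = dp_fedavg_round(updates, noise_multiplier=1.0,
                             rng=np.random.default_rng(1))
```

Raising `noise_multiplier` tightens the privacy guarantee (smaller ε under a Rényi-DP accountant) but perturbs the aggregated update more, which is the accuracy/privacy trade-off the experiments quantify.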