Federated brain tumor segmentation with patient-level local differential privacy
Obradovic, Joni (2025)
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202504298126
Abstract
Federated Learning (FL) facilitates collaborative training of machine learning models across distributed datasets held privately by multiple institutions, such as hospitals with sensitive medical imaging data, without centralized sharing of raw patient data.
To further strengthen patient confidentiality, Local Differential Privacy (LDP) introduces controlled, privacy-preserving noise directly into the model's gradients locally at each participating institution, before their aggregation into a global model. This approach is especially relevant in healthcare, where safeguarding patient data is both ethically and legally mandated.
This thesis investigates and implements local differential privacy techniques within a federated learning framework, specifically applied to brain tumor segmentation using medical imaging. The primary goal was to enhance patient-level privacy protections while minimizing degradation in segmentation performance.
A 3D Residual U-Net model was trained under LDP conditions by adding Gaussian noise locally to gradients before central aggregation. Essential differential privacy parameters, including noise magnitude, gradient clipping thresholds, and local training hyperparameters, were explored to achieve a favorable balance between privacy and segmentation accuracy.
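The local privacy mechanism described above — clipping each institution's gradient to a fixed norm, adding Gaussian noise scaled to that norm, and averaging the noised updates on the server — can be sketched as follows. This is a minimal NumPy illustration of the general Gaussian-mechanism pattern, not the thesis's actual implementation; the function names and parameter values (`clip_norm`, `noise_std`) are illustrative.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a local gradient to at most clip_norm, then add Gaussian noise.

    This is the core LDP step: each participating institution perturbs
    its own update locally, before anything leaves the institution.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Noise standard deviation is proportional to the clipping norm,
    # as in the standard Gaussian mechanism.
    noise = rng.normal(0.0, noise_std * clip_norm, size=grad.shape)
    return clipped + noise

def aggregate(updates):
    """Server-side federated averaging of already-noised local updates."""
    return np.mean(updates, axis=0)
```

The server only ever sees the noised, clipped updates, so the privacy guarantee holds even against an untrusted aggregator; tightening `clip_norm` or raising `noise_std` strengthens privacy at the cost of the segmentation accuracy discussed below.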
Experimental results indicated a slight decrease in segmentation performance, assessed using standard medical imaging metrics (Dice Similarity Coefficient and Hausdorff95 Distance), when incorporating local differential privacy.
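For reference, the Dice Similarity Coefficient used in that evaluation measures the volumetric overlap between a predicted tumor mask and the ground truth, ranging from 0 (no overlap) to 1 (perfect overlap). A minimal sketch for binary masks (the function name and epsilon value are illustrative, not taken from the thesis):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary segmentation masks.

    DSC = 2 * |pred ∩ target| / (|pred| + |target|); eps avoids
    division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

The Hausdorff95 distance complements Dice by measuring boundary error (the 95th percentile of surface-to-surface distances), so the two metrics together capture both volumetric and contour agreement.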
However, it was demonstrated that this performance loss could be substantially mitigated by careful tuning of privacy parameters. Ultimately, the study confirms the practical feasibility of local differential privacy in federated medical imaging applications, highlighting that robust patient-level data protection can be effectively implemented with moderate computational overhead and targeted optimization efforts.
Future work could further refine this balance through advanced hyperparameter optimization techniques and alternative noise addition strategies, expanding the potential for secure and privacy-conscious collaborative machine learning in medical imaging contexts.
