Zlate Dodevski, Tanja Pavleska and Vladimir Trajkovikj
Abstract
Federated learning (FL) represents a pivotal
advancement in applying machine learning (ML) to
healthcare. It addresses the challenges of data privacy and
security by allowing models, rather than raw patient data, to be
shared across institutions. This paper explores how FL can be
effectively employed to support the deployment of large language
models (LLMs) in healthcare settings while maintaining stringent
privacy standards.
Alongside a detailed examination of the challenges of applying
LLMs in the healthcare domain, including privacy, security,
regulatory constraints, and training-data quality, we present a
federated learning architecture tailored for LLMs in healthcare.
This architecture outlines the roles and responsibilities of
participating entities, providing a framework for secure
collaboration. We further analyze privacy-preserving
techniques such as differential privacy and secure aggregation
in the context of federated LLMs for healthcare, offering
insights into their practical implementation.
Our findings suggest that federated learning is a viable
choice for enhancing the capabilities of LLMs in healthcare while
preserving patient privacy. We also identify
persistent challenges in areas such as computational and
communication efficiency, the lack of benchmarks and of FL
aggregation algorithms tailored to LLMs, model
performance, and ethical concerns in participant selection. By
critically evaluating the proposed approach and highlighting its
potential benefits and limitations in real-world healthcare
settings, this work provides a foundation for future research in
secure and privacy-preserving ML deployment in healthcare.