Deshwal, Kuldeep (2025) Understanding data heterogeneity in federated learning. World Journal of Advanced Engineering Technology and Sciences, 15 (2). pp. 530-540. ISSN 2582-8266
WJAETS-2025-0523.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial Share Alike.
Abstract
Federated learning enables machine learning across distributed devices without centralizing sensitive data, preserving privacy while building intelligent systems from collective knowledge. Data heterogeneity, the natural variation in information across participating devices, presents significant challenges, including convergence instability, model bias, communication inefficiency, privacy-utility tradeoffs, and computational imbalance. Despite these obstacles, heterogeneity offers advantages when properly managed, such as improved model generalization, personalization opportunities, greater real-world applicability, enhanced privacy protection, and better fault tolerance. Current solutions address these challenges through personalized federated learning, robust aggregation methods, federated distillation, client clustering, and adaptive participation strategies, while future directions focus on developing advanced heterogeneity metrics, cross-organizational techniques, dynamic adaptation mechanisms, hardware-aware algorithms, theoretical foundations, and standardized benchmarks to further enhance performance in diverse data environments.
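To make the aggregation setting described above concrete, the following minimal Python sketch simulates FedAvg-style weighted averaging over clients whose local datasets differ in size and distribution, a simple stand-in for data heterogeneity. The toy least-squares objective, client sizes, learning rate, and round count are illustrative assumptions, not the paper's experimental setup or a specific method from it.

```python
# Minimal sketch (illustrative, not the paper's method): FedAvg-style
# aggregation over clients with heterogeneous (non-IID) local data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, data, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one client's local data.

    Toy objective: least-squares regression. Each client holds (X, y)
    drawn from its own distribution, modeling data heterogeneity.
    """
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local sample count."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Simulate heterogeneous clients: different sample counts and different
# underlying linear relationships (each client has its own "local truth").
dim = 5
clients = []
for n, shift in [(200, 0.0), (50, 1.0), (20, -2.0)]:
    X = rng.normal(size=(n, dim))
    true_w = np.ones(dim) + shift
    y = X @ true_w + 0.1 * rng.normal(size=n)
    clients.append((X, y))

global_w = np.zeros(dim)
for rnd in range(20):                        # communication rounds
    local_ws = [local_update(global_w, data) for data in clients]
    global_w = fedavg(local_ws, [len(y) for _, y in clients])

print("global model after 20 rounds:", np.round(global_w, 3))
```

Because the larger client dominates the weighted average, the global model drifts toward its local distribution, which is one simple view of the bias and convergence issues that the personalization, clustering, and robust aggregation approaches surveyed in the article aim to mitigate.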
| Item Type: | Article |
|---|---|
| Official URL: | https://doi.org/10.30574/wjaets.2025.15.2.0523 |
| Uncontrolled Keywords: | Adaptation; Decentralization; Heterogeneity; Personalization; Privacy |
| Depositing User: | Editor Engineering Section |
| Date Deposited: | 04 Aug 2025 16:26 |
| URI: | https://eprint.scholarsrepository.com/id/eprint/3500 |