A Study on Federated Learning Techniques for Privacy Preservation
Published 2026-01-02
Federated Learning, Privacy Preservation, Distributed Machine Learning, Secure Aggregation, Differential Privacy, Data Security
Section: Articles

How to Cite
[1] A. Mehta, “A Study on Federated Learning Techniques for Privacy Preservation”, IJADSMC, vol. 1, no. 1, pp. 01–13, Jan. 2026, Accessed: Mar. 02, 2026. [Online]. Available: https://worldcometresearchgroup.com/index.php/ijadsmc/article/view/29

Abstract
The rapid proliferation of data-driven applications and intelligent systems has heightened concerns about data privacy, data security, and regulatory compliance. Conventional centralized machine learning requires consolidating raw data from distributed sources, creating serious risks of information leakage, unauthorized data access, and violations of privacy regulations such as GDPR and HIPAA. Federated Learning (FL) has emerged as a promising decentralized learning framework that enables collaborative model training across multiple clients without moving raw data to a central server: only local model updates are shared and aggregated, so data remains local and privacy is improved. This paper provides an in-depth analysis of federated learning techniques with particular attention to privacy. It investigates the core principles of federated learning, architectural designs, communication schemes, and aggregation methods. Privacy-enhancing techniques, including secure aggregation, differential privacy, homomorphic encryption, and trusted execution environments, are critically analyzed. The paper also examines recent developments, open concerns, and the trade-offs among privacy, communication efficiency, model noise, and system scalability. A comparative analysis of federated learning methods is presented in structured tabular form and mathematical formulations. Experimental results from existing studies are reviewed to illustrate the effectiveness of federated learning in preserving privacy while achieving acceptable model accuracy. Finally, open research issues and future directions are outlined to guide the further development of privacy-preserving federated learning systems.
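The abstract's core mechanism, sharing only local model updates for aggregation rather than raw data, can be illustrated with a minimal sketch. The code below is a hypothetical toy implementation of federated averaging in the style of FedAvg [1] on a linear model, with optional update clipping and Gaussian noise as a simple stand-in for the differential-privacy mechanisms the paper surveys; it is not the paper's own method, and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    """One round of local gradient descent on a linear model: a
    stand-in for each client's private training step."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets, noise_scale=0.0, clip=1.0):
    """Federated averaging: each client trains locally and shares only
    its update, optionally clipped and noised (a crude DP-style step).
    Raw data never leaves the client."""
    updates = []
    for X, y in client_datasets:
        delta = local_update(global_w, X, y) - global_w
        norm = np.linalg.norm(delta)
        delta = delta / max(1.0, norm / clip)               # bound update norm
        delta += np.random.normal(0.0, noise_scale, delta.shape)  # add noise
        updates.append(delta)
    return global_w + np.mean(updates, axis=0)

# Usage: four synthetic clients, ten communication rounds.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fed_avg(w, clients)
```

Raising `noise_scale` above zero trades model accuracy for stronger privacy, which is exactly the privacy/utility tension the survey discusses.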
References
[1] McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS).
[2] Kairouz, P., et al. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2), 1–210.
[3] Bonawitz, K., et al. (2017). Practical secure aggregation for privacy-preserving machine learning. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS).
[4] Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. Theory of Cryptography Conference (TCC).
[5] Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science.
[6] Fredrikson, M., Jha, S., & Ristenpart, T. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security.
[7] Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership inference attacks against machine learning models. IEEE Symposium on Security and Privacy.
[8] Zhu, L., Liu, Z., & Han, S. (2019). Deep leakage from gradients. Advances in Neural Information Processing Systems (NeurIPS).
[9] Geiping, J., Bauermeister, H., Drozdzal, M., & Moeller, M. (2020). Inverting gradients – how easy is it to break privacy in federated learning? Advances in Neural Information Processing Systems (NeurIPS).
[10] Geyer, R. C., Klein, T., & Nabi, M. (2017). Differentially private federated learning: A client level perspective. NIPS Workshop on Machine Learning on the Phone and other Consumer Devices.
[11] Abadi, M., et al. (2016). Deep learning with differential privacy. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.
[12] Truex, S., et al. (2019). A hybrid approach to privacy-preserving federated learning. Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security.
[13] Acar, A., et al. (2018). A survey on homomorphic encryption schemes: Theory and implementation. ACM Computing Surveys, 51(4).
[14] Hardy, S., et al. (2017). Private federated learning on vertically partitioned data via entity resolution and additive secret sharing. Proceedings of the VLDB Endowment.
[15] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020). Federated learning: Challenges, methods, and future directions. IEEE Signal Processing Magazine, 37(3), 50–60.
[16] Tirumalasetty, P. (2022). Coded Machine Unlearning using Machine Learning.