Document Type: Research Article

Authors

1 Information Systems and Security Lab, EE Department, Sharif University of Technology, Tehran, Iran

2 Electronics Research Institute, Sharif University of Technology, Tehran, Iran

Abstract

Large-scale data collection is challenging in conventional centralized learning, as privacy concerns or prohibitive policies may arise. Federated Learning (FL) has been proposed as a solution: data owners, called participants, collaboratively train a common model while their privacy is preserved. However, recent attacks, namely Membership Inference Attacks (MIAs) and Poisoning Attacks (PAs), threaten both the privacy and the performance of FL systems. This paper develops an innovative Adversarial-Resilient Privacy-preserving Scheme (ARPS) for FL that copes with these threats using differential privacy and cryptography. Our experiments show that ARPS establishes a private model with high accuracy, outperforming state-of-the-art approaches. To the best of our knowledge, this work is the only scheme that provides privacy protection beyond the output model, in conjunction with Byzantine resilience, without sacrificing accuracy or efficiency.

Keywords
