Information Leakage Mitigation to Protect Convolutional Neural Networks Against Remote Side-Channel Analysis

Document Type: Research Article

Authors

Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran

DOI: 10.22042/isecure.2026.243620
Abstract
Machine learning systems, despite their high inference accuracy in practical applications, are exposed to security and reliability threats during both the training and inference phases. In this paper, we demonstrate that internal information can be extracted from a neural network without physical access to the device. The attack relies on an on-chip power sensor that enables remote power sampling. Until now, this sensor has been used to extract power samples from cryptographic circuits, where its functionality and correctness were thoroughly validated; here, the same sensor is applied to a neural network, allowing us to evaluate the sensor's performance on applications beyond cryptographic algorithms. We show that the sensor accurately captures the power consumption of the neural network, and that a 16-bit neural network weight can be recovered from between 20,000 and 50,000 power samples. Finally, we harden the neural network against side-channel attacks: our results show that the network can be made resistant to first-order side-channel attacks with an area overhead of about 6%. The strength of this hardening was measured with a hypothesis-testing method, which indicates that the attack becomes eight times harder to mount.
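To make the attack flow concrete, the following is a minimal correlation power analysis (CPA) sketch in the style of Brier et al.'s leakage-model attack. It is illustrative only: the Hamming-weight leakage model, the byte-wise recovery of a 16-bit weight, and the trace layout (one row per inference, one column per time sample) are assumptions for this sketch, not details taken from the paper.

import numpy as np

def hamming_weight16(values: np.ndarray) -> np.ndarray:
    """Hamming weight of each element of a 1-D array of 16-bit values."""
    bits = np.unpackbits(values.astype(np.uint16).view(np.uint8))
    return bits.reshape(values.shape[0], 16).sum(axis=1)

def cpa_recover_byte(traces: np.ndarray, inputs: np.ndarray, byte_idx: int) -> int:
    """Return the most likely value of one byte of a secret 16-bit weight.

    traces : (n_traces, n_samples) float array of remotely sampled power.
    inputs : (n_traces,) uint16 array of known activations fed to the MAC.
    """
    tr_c = traces - traces.mean(axis=0)            # centered traces
    tr_norm = np.sqrt((tr_c ** 2).sum(axis=0))     # per-sample norms
    best_corr, best_guess = -1.0, 0
    for guess in range(256):
        # Hypothetical weight with only the targeted byte set to the guess.
        weight = guess << (8 * byte_idx)
        # Hypothetical leakage: Hamming weight of the 16-bit product.
        product = (inputs.astype(np.uint32) * weight) & 0xFFFF
        hyp = hamming_weight16(product).astype(np.float64)
        hyp_c = hyp - hyp.mean()
        hyp_norm = np.sqrt((hyp_c ** 2).sum())
        # Pearson correlation of the hypothesis with every time sample.
        corr = np.abs(hyp_c @ tr_c) / np.maximum(hyp_norm * tr_norm, 1e-12)
        peak = corr.max()
        if peak > best_corr:
            best_corr, best_guess = peak, guess
    return best_guess

With trace counts in the 20,000 to 50,000 range reported in the abstract, the correct guess would be expected to produce the dominant correlation peak.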
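The abstract does not name the hardening technique. One textbook way to achieve first-order resistance is to split each secret weight into random shares so that no single intermediate value depends on the whole weight; the additive two-share scheme below is purely a hypothetical illustration of that idea, not the paper's countermeasure.

import secrets

MOD = 1 << 16  # 16-bit datapath, matching the weights attacked above

def mask_weight(w: int) -> tuple[int, int]:
    """Split w into shares (w1, w2) with w = (w1 + w2) mod 2^16."""
    w1 = secrets.randbelow(MOD)
    return w1, (w - w1) % MOD

def masked_mac(x: int, shares: tuple[int, int]) -> int:
    """Multiply on each share separately; recombine only at the very end."""
    y1 = (x * shares[0]) % MOD
    y2 = (x * shares[1]) % MOD
    return (y1 + y2) % MOD  # equals (x * w) mod 2^16

Because multiplication by a public input distributes over the additive shares, the masked datapath computes the same result while each individual operation touches only a uniformly random share.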
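A common hypothesis-testing method for grading such a countermeasure is a TVLA-style fixed-vs-random Welch's t-test. The sketch below assumes that methodology, including the conventional |t| > 4.5 threshold, since the abstract does not specify which statistical test was used.

import numpy as np

def welch_t(traces_fixed: np.ndarray, traces_random: np.ndarray) -> np.ndarray:
    """Per-sample Welch's t-statistic between two trace sets of shape (n_i, n_samples)."""
    m1, m2 = traces_fixed.mean(axis=0), traces_random.mean(axis=0)
    v1 = traces_fixed.var(axis=0, ddof=1) / traces_fixed.shape[0]
    v2 = traces_random.var(axis=0, ddof=1) / traces_random.shape[0]
    return (m1 - m2) / np.sqrt(v1 + v2)

def leaks_first_order(fixed: np.ndarray, rand: np.ndarray, threshold: float = 4.5) -> bool:
    """Flag first-order leakage if any sample exceeds the TVLA threshold."""
    return bool(np.any(np.abs(welch_t(fixed, rand)) > threshold))

Under this convention, a hardened design passes when no time sample of the t-statistic crosses the threshold for the trace budget under test.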


Articles in Press, Accepted Manuscript
Available Online from 15 May 2026