Open Access

Securing Deep Neural Networks: A Life-Cycle Perspective On Trojan Attacks And Defensive Measures

Center for Artificial Intelligence and Cybersecurity, University of California, San Diego, USA
Faculty of Electrical and Computer Engineering, University of Porto, Porto, Portugal

Abstract

As Deep Neural Networks (DNNs) become increasingly integrated into critical systems—from healthcare diagnostics to autonomous vehicles—their vulnerability to malicious attacks has emerged as a serious security concern. Among these threats, Trojan attacks pose a unique risk by embedding hidden triggers during training that activate malicious behavior during inference. This paper presents a comprehensive life-cycle perspective on the security of DNNs, examining vulnerabilities across model development, training, deployment, and maintenance stages. We systematically categorize Trojan attack vectors, analyze real-world case studies, and evaluate the efficacy of current defense mechanisms, including pruning, fine-tuning, input filtering, and model certification. Furthermore, we propose a proactive framework for embedding security at each stage of the DNN life cycle, aiming to guide researchers and developers toward more resilient AI systems. Our findings highlight the importance of integrating security as a design principle rather than a reactive afterthought.
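To make the trigger mechanism concrete, the sketch below illustrates the data-poisoning step common to many Trojan attacks described in the literature: a small constant patch is stamped onto a fraction of training images, and those images are relabeled to an attacker-chosen target class. The function name, parameters, and patch placement are illustrative assumptions, not a method from this paper.

```python
import numpy as np

def stamp_trigger(images, labels, target_label, poison_frac=0.1,
                  patch_size=3, patch_value=1.0, seed=0):
    """Return a poisoned copy of (images, labels) plus the poisoned indices.

    A patch_size x patch_size patch of constant value is stamped in the
    bottom-right corner of a randomly chosen fraction of images, and those
    images' labels are flipped to target_label. A model trained on this data
    can learn to associate the patch with the target class, so that at
    inference time any input carrying the same patch triggers the malicious
    behavior while clean inputs are classified normally.
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value  # stamp trigger
    labels[idx] = target_label                             # flip labels
    return images, labels, idx
```

Defenses surveyed in the paper, such as input filtering, aim to detect exactly this kind of localized, class-correlated artifact before it reaches the model.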
