
Securing Deep Neural Networks: A Life-Cycle Perspective on Trojan Attacks and Defensive Measures
Abstract
As Deep Neural Networks (DNNs) become increasingly integrated into critical systems, from healthcare diagnostics to autonomous vehicles, their vulnerability to malicious attacks has emerged as a serious security concern. Among these threats, Trojan attacks pose a unique risk: hidden triggers embedded during training activate malicious behavior at inference time. This paper presents a comprehensive life-cycle perspective on the security of DNNs, examining vulnerabilities across the model development, training, deployment, and maintenance stages. We systematically categorize Trojan attack vectors, analyze real-world case studies, and evaluate the efficacy of current defense mechanisms, including pruning, fine-tuning, input filtering, and model certification. Furthermore, we propose a proactive framework for embedding security at each stage of the DNN life cycle, aiming to guide researchers and developers toward more resilient AI systems. Our findings highlight the importance of treating security as a design principle rather than as a reactive afterthought.
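To make the trigger mechanism concrete, the sketch below illustrates a BadNets-style data-poisoning attack of the kind the abstract describes: a small fraction of training images is stamped with a fixed pixel pattern and relabeled to an attacker-chosen class. The trigger shape, poison rate, target label, and the NumPy helpers (`stamp_trigger`, `poison_dataset`) are illustrative assumptions for exposition, not details taken from the paper.

```python
# Minimal sketch of a BadNets-style Trojan (data-poisoning) attack.
# All parameters below are illustrative assumptions.
import numpy as np

def stamp_trigger(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Stamp a small white square (the trigger) in the bottom-right corner.

    Assumes a 2D grayscale image with pixel values normalized to [0, 1].
    """
    poisoned = image.copy()
    poisoned[-size:, -size:] = 1.0
    return poisoned

def poison_dataset(images: np.ndarray,
                   labels: np.ndarray,
                   target_label: int = 0,
                   poison_rate: float = 0.05,
                   seed: int = 0):
    """Return a training set where a small fraction of samples carry the
    trigger and are relabeled to the attacker's target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # malicious mapping: trigger -> target class
    return images, labels

# Usage: poison 5% of a toy 28x28 grayscale dataset.
clean_x = np.random.rand(1000, 28, 28)
clean_y = np.random.randint(0, 10, size=1000)
poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y)
```

A model trained on such a poisoned set typically retains near-normal accuracy on clean inputs while misclassifying any triggered input as the target class, which is precisely why standard validation fails to expose the backdoor and why the defenses surveyed here (pruning, fine-tuning, input filtering, certification) are needed.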
Keywords
Deep Neural Networks, Trojan Attacks, AI Security, Machine Learning Lifecycle
Copyright License
Authors retain the copyright of their manuscripts, and all Open Access articles are disseminated under the terms of the Creative Commons Attribution License 4.0 (CC-BY), which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is appropriately cited. The use of general descriptive names, trade names, trademarks, and so forth in this publication, even if not specifically identified, does not imply that these names are not protected by the relevant laws and regulations.