Securing Deep Neural Networks: A Life-Cycle Perspective on Trojan Attacks and Defensive Measures

Abstract

As Deep Neural Networks (DNNs) become increasingly integrated into critical systems—from healthcare diagnostics to autonomous vehicles—their vulnerability to malicious attacks has emerged as a serious security concern. Among these threats, Trojan attacks pose a unique risk by embedding hidden triggers during training that activate malicious behavior during inference. This paper presents a comprehensive life-cycle perspective on the security of DNNs, examining vulnerabilities across model development, training, deployment, and maintenance stages. We systematically categorize Trojan attack vectors, analyze real-world case studies, and evaluate the efficacy of current defense mechanisms, including pruning, fine-tuning, input filtering, and model certification. Furthermore, we propose a proactive framework for embedding security at each stage of the DNN life cycle, aiming to guide researchers and developers toward more resilient AI systems. Our findings highlight the importance of integrating security as a design principle rather than a reactive afterthought.
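To make the threat model concrete, below is a minimal sketch of the trigger-injection step behind a typical Trojan (backdoor) attack as the abstract describes it: a small patch is stamped onto a fraction of the training images, and those images are relabeled to the attacker's target class. The data layout, patch placement, poison rate, and target label are illustrative assumptions, not values taken from the paper.

import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Stamp a square trigger onto a random subset of training images
    and relabel them to the attacker's chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Place the trigger patch in the bottom-right corner of each image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_label  # malicious label flip
    return images, labels, idx

# Toy usage: 100 synthetic 28x28 grayscale images over 10 classes.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_p, y_p, poisoned_idx = poison_dataset(X, y)
print(f"poisoned {len(poisoned_idx)} of {len(X)} training samples")

A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger appears at inference time, which is the stealth property that motivates life-cycle defenses.

On the defense side, pruning-style mitigations rest on the observation that backdoor behavior is often carried by units that clean data rarely activates. The sketch below (reusing the NumPy import above) is a hedged illustration in the spirit of activation-based pruning rather than any specific published algorithm; prune_dormant_units and its parameters are hypothetical names introduced here for illustration. It zeroes the outgoing weights of the least-active hidden units, as measured on a clean validation set.

def prune_dormant_units(outgoing_weights, clean_activations, prune_fraction=0.1):
    """Disable hidden units whose mean activation on clean inputs is lowest.

    outgoing_weights: array of shape (n_units, next_layer_dim).
    clean_activations: array of shape (n_clean_samples, n_units).
    """
    mean_act = clean_activations.mean(axis=0)   # per-unit mean activation
    n_prune = int(len(mean_act) * prune_fraction)
    dormant = np.argsort(mean_act)[:n_prune]    # least-active units
    pruned = outgoing_weights.copy()
    pruned[dormant, :] = 0.0                    # cut their influence downstream
    return pruned, dormant

# Toy usage: 64 hidden units feeding a 10-way output layer.
acts = np.random.default_rng(3).random((200, 64))
W = np.random.default_rng(4).normal(size=(64, 10))
W_pruned, dropped = prune_dormant_units(W, acts)
print(f"pruned {len(dropped)} dormant units")

In practice, fine-tuning on clean data often follows such pruning to recover benign accuracy lost to the removed units.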

Keywords

Deep Neural Networks, Trojan Attacks, AI Security, Machine Learning Lifecycle

How to Cite

Securing Deep Neural Networks: A Life-Cycle Perspective on Trojan Attacks and Defensive Measures. (2024). International Journal of Advanced Artificial Intelligence Research, 1(01), 26–31. https://aimjournals.com/index.php/ijaair/article/view/132