International Journal of Cyber Threat Intelligence and Secure Networking


Synergizing Generative AI and Explainable Machine Learning in Security Operations Centers: Mitigating Alert Fatigue and Enhancing Analyst Performance

Authors

  • Dr. Dmitry V. Sokolov, Independent Researcher, Distributed Systems & Threat Intelligence, Novosibirsk, Russia

Keywords:

Security Operations Center (SOC), Generative AI, Explainable AI (XAI), Alert Fatigue

Abstract

The contemporary Security Operations Center (SOC) faces an existential crisis characterized by exponential data volume growth, sophisticated adversarial tactics, and a critical shortage of skilled personnel. This study investigates the integration of Generative Artificial Intelligence (GenAI) and Explainable Artificial Intelligence (XAI) to address the twin challenges of alert fatigue and decision-making latency. By employing a mixed-methods approach, we analyze the efficacy of a proposed "Hybrid-Intelligence SOC" framework against traditional Security Information and Event Management (SIEM) workflows. Our research leverages recent empirical data regarding GenAI’s impact on high-skilled labor and combines it with XAI-driven detection models for malicious domains and ransomware. We demonstrate that while traditional automation (SOAR) handles deterministic tasks, the introduction of GenAI "Copilots" significantly reduces the cognitive load associated with investigation and reporting, particularly for less experienced analysts. Furthermore, the integration of XAI provides necessary interpretability, fostering trust in automated alerts. The findings suggest that this synergistic approach is associated with a 40% reduction in Mean Time to Remediate (MTTR) and a substantial decrease in false positive triage time. We conclude by discussing the imperative of adversarial robustness and the economic implications of AI-assisted upskilling in the cybersecurity workforce.
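
To make the XAI component described above concrete, the minimal Python sketch below illustrates the general idea of an interpretable malicious-domain scorer that attaches per-alert feature attributions to each alert in the triage queue. This is an illustrative sketch only, not the models evaluated in this study: the lexical features, the toy domain list, and the use of logistic-regression coefficients as a stand-in for SHAP-style attributions are all assumptions introduced here for illustration.

```python
# Minimal, illustrative sketch only -- NOT the implementation evaluated in the
# paper. It shows the shape of an interpretable (XAI-style) malicious-domain
# scorer whose per-alert feature contributions could be surfaced to an analyst
# alongside the alert. Features, example domains, and the coefficient*value
# attribution are assumptions made for illustration.
import math

import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["length", "entropy", "digit_ratio"]

def domain_features(domain: str) -> list[float]:
    """Simple lexical features commonly used for DGA/malicious-domain detection."""
    name = domain.split(".")[0]
    counts = {c: name.count(c) for c in set(name)}
    entropy = -sum((n / len(name)) * math.log2(n / len(name)) for n in counts.values())
    digit_ratio = sum(c.isdigit() for c in name) / len(name)
    return [float(len(name)), entropy, digit_ratio]

# Tiny illustrative training set (benign vs. DGA-like domains).
benign = ["example.com", "mail.google.com", "github.com", "wikipedia.org"]
malicious = ["xj3k9qzt7p.biz", "a1b2c3d4e5f6.info", "qwrtzpl0kx9.net", "98f7d6s5a4.top"]
X = np.array([domain_features(d) for d in benign + malicious])
y = np.array([0] * len(benign) + [1] * len(malicious))

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_alert(domain: str) -> dict:
    """Score a domain and return per-feature contributions (coefficient * value),
    a crude stand-in for the SHAP/LIME-style attributions an XAI layer provides."""
    x = np.array(domain_features(domain))
    prob = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = dict(zip(FEATURES, model.coef_[0] * x))
    return {
        "domain": domain,
        "malicious_prob": round(float(prob), 3),
        "contributions": {k: round(float(v), 3) for k, v in contributions.items()},
    }

if __name__ == "__main__":
    # The explanation travels with the alert, so the analyst (or a GenAI
    # copilot drafting the incident summary) sees why the score is high,
    # not just an opaque verdict.
    print(explain_alert("kq8z1r7vw3.xyz"))
```

In the framework described in the abstract, an attribution like this would be the interpretability layer that a GenAI copilot could translate into an analyst-readable investigation summary.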

 



Published

2025-10-18

How to Cite

Sokolov, D. V. (2025). Synergizing Generative AI and Explainable Machine Learning in Security Operations Centers: Mitigating Alert Fatigue and Enhancing Analyst Performance. International Journal of Cyber Threat Intelligence and Secure Networking, 2(10), 18-22. https://aimjournals.com/index.php/ijctisn/article/view/361

