Optimized Signal-Driven Learning-Based Control Strategy for Decentralized Agents in Adversarial Communication Environments
Abstract
The increasing deployment of decentralized multi-agent systems (MAS) in cyber-physical infrastructures has intensified concerns regarding robustness under adversarial communication environments. These environments, characterized by denial-of-service (DoS), false data injection (FDI), and coordinated cyber-attacks, significantly degrade system stability and cooperative performance. This study proposes an optimized signal-driven learning-based control strategy that integrates adaptive event-triggered mechanisms with reinforcement learning (RL) paradigms to enhance resilience and efficiency in decentralized agents. The proposed framework leverages signal-driven triggering conditions to minimize communication overhead while ensuring stability under adversarial disruptions. A hybrid architecture combining adaptive control, observer-based estimation, and actor–critic reinforcement learning is developed to dynamically compensate for uncertainties, disturbances, and malicious signal manipulations.
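The signal-driven triggering idea summarized above can be sketched as follows. The relative-threshold form and the parameter names (`sigma`, `eps`) are illustrative assumptions for exposition, not the paper's exact triggering condition: an agent transmits its state only when the error between the current state and the last broadcast sample exceeds a state-dependent bound, so communication load falls as the system settles.

```python
import numpy as np

def should_trigger(x, x_last, sigma=0.2, eps=1e-3):
    """Relative-threshold event trigger: transmit only when the
    sampling error exceeds a state-dependent bound.
    (Illustrative form; the paper's exact condition may differ.)"""
    err = np.linalg.norm(x - x_last)
    return err > sigma * np.linalg.norm(x) + eps

# Count transmissions along a simple stable trajectory.
x = np.array([1.0, -1.0])
x_last = np.zeros(2)
transmissions = 0
for k in range(50):
    x = 0.9 * x  # toy stable dynamics
    if should_trigger(x, x_last):
        x_last = x.copy()  # broadcast and update the held sample
        transmissions += 1
print(transmissions)  # far fewer broadcasts than the 50 time steps
```

Under this toy dynamics the agent broadcasts only intermittently rather than at every step, which is the communication saving the event-triggered mechanism targets.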
The theoretical foundation of the proposed strategy is established through Lyapunov stability analysis, ensuring boundedness and convergence properties despite intermittent communication failures. Additionally, the framework incorporates memory-based event-triggering and predictive estimation to mitigate the effects of DoS attacks and packet losses. The integration of learning-based optimization enables agents to adaptively refine control policies in real time, improving performance in uncertain and adversarial environments. Comparative analysis with existing resilient and event-triggered control approaches demonstrates that the proposed method achieves superior communication efficiency, robustness, and convergence speed.
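The predictive-estimation idea used against DoS outages can be illustrated with a minimal sketch. The linear model `A` and the outage pattern below are assumptions for demonstration, not the paper's estimator: when a packet arrives the agent adopts the measurement, and during an outage it propagates its last estimate through the nominal model instead of freezing a stale sample.

```python
import numpy as np

A = np.array([[0.95, 0.10],
              [0.00, 0.90]])  # assumed nominal discrete-time dynamics

def predict_hold(x_hat, packet_received, x_meas):
    """Predictive hold: adopt the measurement when the packet arrives;
    during a DoS outage, propagate the estimate through the model."""
    return x_meas.copy() if packet_received else A @ x_hat

x = np.array([1.0, 1.0])   # true neighbor state
x_pred = x.copy()          # model-based predictive estimate
x_zoh = x.copy()           # naive zero-order hold (stale last packet)
for k in range(30):
    x = A @ x
    received = k < 10      # DoS attack blocks all packets from step 10 on
    x_pred = predict_hold(x_pred, received, x)
    if received:
        x_zoh = x.copy()

err_pred = np.linalg.norm(x - x_pred)
err_zoh = np.linalg.norm(x - x_zoh)
print(err_pred < err_zoh)  # prints True: prediction tracks better than a stale hold
```

With an accurate model the predictive estimate tracks the true state through the outage, while the zero-order hold drifts; with model mismatch the prediction error grows but typically remains smaller than the stale-sample error over short outages.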
Simulation-based evaluation reveals that the optimized strategy maintains consensus and tracking performance even under severe attack scenarios, outperforming traditional model-based and purely adaptive approaches. The results indicate a significant reduction in communication load without compromising system stability. This work contributes to the advancement of resilient decentralized control by bridging signal-driven control mechanisms with learning-based optimization, offering a scalable and robust solution for next-generation intelligent networked systems.
Similar Articles
- Nabeel Ehsan, Deep Learning for Continuous Auditing & Real-Time Assurance, International Journal of Advanced Artificial Intelligence Research: Vol. 3 No. 04 (2026)
- Adrian Velasco, Meera Narayan, Revolutionizing Silicon Photonic Device Design Through Deep Generative Models: An Inverse Approach and Emerging Trends, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 06 (2025)
- Olabayoji Oluwatofunmi Oladepo, Opeyemi Eebru Alao, Explainable Machine Learning for Financial Analysis, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 07 (2025)
- Dr. Elias T. Vance, Prof. Camille A. Lefevre, Enhancing Trust and Clinical Adoption: A Systematic Literature Review of Explainable Artificial Intelligence (XAI) Applications in Healthcare, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 10 (2025)
- Dr. Alejandro Moreno, An Explainable, Context-Aware Zero-Trust Identity Architecture for Continuous Authentication in Hybrid Device Ecosystems, International Journal of Advanced Artificial Intelligence Research: Vol. 2 No. 11 (2025)