Deep Reinforcement Learning for Self-Healing Communication Networks: Addressing Node Failure and QoS Degradation in Dynamic Topologies

Authors

  • G. Menaka, Professor of Computer Science, Vice-Principal, Vivekanandha College of Arts and Sciences for Women (Autonomous), Elayampalayam, Tiruchengode-637205 https://orcid.org/0009-0003-2549-5521
  • Anil Kumar School of Computing, DIT University, Makkawala, Dehradun-248009, Uttarakhand, India. https://orcid.org/0000-0003-0982-9424
  • I.B. Sapaev, Head of the Department of Physics and Chemistry, Tashkent Institute of Irrigation and Agricultural Mechanization Engineers National Research University, Tashkent, Uzbekistan; Scientific Researcher, University of Tashkent for Applied Science, Tashkent, Uzbekistan; School of Engineering, Central Asian University, Tashkent-111221, Uzbekistan. https://orcid.org/0000-0003-2365-1554
  • Abdullayev Dadaxon Research Scholar (Agriculture), Department of Fruits and Vegetable Growing, Urgench State University, 14, Kh. Alimdjan Str, 220100 Urganch, Khorezm, Uzbekistan. https://orcid.org/0009-0009-8583-2538
  • Sardor Ulkanov Senior Teacher, Department of Transport Logistics, Andijan State Technical Institute, Andijan, Uzbekistan https://orcid.org/0009-0005-2466-3591
  • R.Praveenkumar Associate Professor, Department of Electronics and Communication Engineering, Nandha Engineering College, Erode - 638052, Tamilnadu, India https://orcid.org/0009-0008-5129-9096

DOI:

https://doi.org/10.31838/NJAP/07.02.19

Keywords:

Deep Reinforcement Learning, Self-Healing Networks, Communication Topology, QoS Optimization, PPO Algorithm, Autonomous Routing

Abstract

Maintaining service continuity and quality of service (QoS) in modern communication networks (e.g., ad hoc, vehicular, and IoT-driven networks) remains difficult in the presence of high node-failure rates and rapid topology changes. Traditional routing and recovery mechanisms, which are largely reactive or statically configured, cannot adapt to this level of real-time disruption, causing additional latency, reduced reliability, and degraded service. This paper proposes a novel Deep Reinforcement Learning-based Self-Healing Framework (DRL-SHF) to address these limitations by automatically reconfiguring network paths around failures in an autonomous, adaptive manner. We model the network as a Markov Decision Process (MDP) and apply Proximal Policy Optimization (PPO), an advanced DRL algorithm, together with Generalized Advantage Estimation (GAE) to stabilize learning and continuously refine routing strategies. By proactively observing the network state, predicting where failures are most likely to occur, and rerouting data over reservable alternate optimal paths, the system delivers low-latency, energy-efficient, QoS-aware communication. Simulation experiments in NS-3, combining realistic failure and mobility models, show that DRL-SHF reduces packet loss by 32.6% compared to heuristic and conventional RL-based methods, while improving average latency by 27.8% and throughput by 18.4%. These findings support deployment of the framework in next-generation self-organizing networks for 5G, IoT, and mission-critical communication scenarios where real-time resilience and autonomy are critical.
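The abstract names Generalized Advantage Estimation (GAE) as the technique used to stabilize PPO training. As an illustration only (this is not the authors' implementation, and the toy reward/value numbers below are hypothetical), a minimal sketch of the standard GAE recursion that PPO-style trainers typically use:

```python
def compute_gae(rewards, values, dones, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation.

    A_t = delta_t + gamma * lam * (1 - done_t) * A_{t+1}, where
    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t).
    """
    advantages = [0.0] * len(rewards)
    next_value, next_adv = last_value, 0.0
    # Sweep backward so each step can reuse the advantage of the step after it.
    for t in reversed(range(len(rewards))):
        mask = 0.0 if dones[t] else 1.0          # zero out bootstrap at episode end
        delta = rewards[t] + gamma * next_value * mask - values[t]
        next_adv = delta + gamma * lam * mask * next_adv
        advantages[t] = next_adv
        next_value = values[t]
    # Value-function targets for the PPO critic loss.
    returns = [a + v for a, v in zip(advantages, values)]
    return advantages, returns


# Toy 3-step episode (hypothetical numbers): terminal at the last step.
adv, ret = compute_gae(
    rewards=[1.0, 1.0, 1.0],
    values=[0.5, 0.5, 0.5],
    dones=[False, False, True],
    last_value=0.0,
    gamma=0.9,
    lam=0.95,
)
print(adv)  # advantages shrink toward the terminal step
```

In a self-healing routing setting, the per-step reward would encode QoS terms such as latency, packet delivery, and energy cost; the resulting advantages feed the PPO clipped-surrogate policy update.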

Published

2025-08-21

How to Cite

G.Menaka, Anil Kumar, I.B.Sapaev, Abdullayev Dadaxon, Sardor Ulkanov, & R.Praveenkumar. (2025). Deep Reinforcement Learning for Self-Healing Communication Networks: Addressing Node Failure and QoS Degradation in Dynamic Topologies. National Journal of Antennas and Propagation, 7(2), 133-144. https://doi.org/10.31838/NJAP/07.02.19

