A Multi-agent Deep Reinforcement Learning Framework for Resource Allocation Optimization in Agricultural Heterogeneous Network


Wu K., Li Y., Nie J., Zhao J., ERCİŞLİ S.

IEEE Transactions on Consumer Electronics, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Publication Date: 2026
  • DOI: 10.1109/tce.2026.3681349
  • Journal Name: IEEE Transactions on Consumer Electronics
  • Journal Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC
  • Keywords: Deep reinforcement learning, Federated learning, Heterogeneous network, IoT, Resource optimization
  • Atatürk University Affiliated: Yes

Abstract

With the rapid development of smart terminal devices in the consumer electronics sector, optimizing resources across heterogeneous networks has emerged as a critical challenge for enhancing user experience. This paper proposes MAD3QN, a multi-agent deep reinforcement learning-based agricultural resource optimization algorithm, to address core issues such as inefficient resource allocation and unstable communication quality in smart device networks. The algorithm integrates a three-layer federated learning framework with a heterogeneous network resource allocation model. It mitigates Q-value overestimation through a Dueling Double DQN structure, while employing a multi-agent coordination mechanism to counter interference in complex dynamic environments. Experimental results demonstrate that in a 50-user scenario, the system achieves a capacity of 199.3 Mbps, a 3.4% improvement over the next-best algorithm. In high-interference environments, the quality of service (QoS) satisfaction rate reaches 92.7%, and joint energy-spectrum efficiency significantly outperforms existing methods. This provides an efficient and reliable solution for future resource optimization in agricultural IoT environments, with broad application prospects and practical value.
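The two ingredients the abstract names, the dueling aggregation and the double-DQN target, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the batch size, the three discrete resource-allocation actions, and the randomly drawn value/advantage heads are all hypothetical stand-ins for trained network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(value, advantage):
    # Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    # Subtracting the mean advantage makes V and A identifiable.
    return value + advantage - advantage.mean(axis=-1, keepdims=True)

# Hypothetical toy setting: a batch of 4 next-states, 3 discrete
# resource-allocation actions per agent.
value_online = rng.normal(size=(4, 1))   # online net value head
adv_online = rng.normal(size=(4, 3))     # online net advantage head
value_target = rng.normal(size=(4, 1))   # target net value head
adv_target = rng.normal(size=(4, 3))     # target net advantage head

q_online = dueling_q(value_online, adv_online)
q_target = dueling_q(value_target, adv_target)

rewards = rng.normal(size=4)
gamma = 0.99

# Double DQN target: the ONLINE network selects the action,
# the TARGET network evaluates it. Decoupling selection from
# evaluation is what curbs the Q-value overestimation of vanilla DQN.
best_actions = q_online.argmax(axis=1)
td_target = rewards + gamma * q_target[np.arange(4), best_actions]
```

In a full MAD3QN-style system each agent would run this update on its own observations, with the federated layers aggregating model parameters rather than raw data; the sketch only shows the per-agent target computation.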