Cache-enabled reinforcement learning scheme for power allocation and user selection in opportunistic downlink NOMA transmissions

Ahmad Gendia, Osamu Muta, Ahmed Nasser

Research output: Contribution to journal › Journal article › Peer-reviewed

3 Citations (Scopus)

Abstract

Non-orthogonal multiple access (NOMA) allows multiple user equipment (UE) to simultaneously share the same resource blocks using varying levels of transmit power at the base station (BS) side. Proper allocation of transmission power and selection of candidate users for pairing over the same resource block are critical for efficient utilization of the available resources. Optimal UE selection and power splitting among paired UEs can be obtained through an exhaustive search over the space of all possible solutions. However, the cost incurred by such an approach can render it practically infeasible. Reinforcement learning (RL) deploying double deep Q-networks (DDQN) is a promising framework that can be adopted to tackle the problem. In this article, an RL-based DDQN scheme is proposed for user pairing in opportunistic access to downlink NOMA systems with capacity-limited backhaul link connections. The proposed algorithm relies on proactive data caching to alleviate the throttling caused by backhaul bottlenecks, while optimized UE selection and power allocation are accomplished through continuous interaction between an RL agent and the NOMA environment to increase the overall system throughput. Simulation results are presented to showcase the near-optimal strategy achieved by the proposed scheme.
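To illustrate the core mechanism the abstract refers to, the sketch below shows the double deep Q-network update rule, in which the online network selects the next action and the target network evaluates it. This is a minimal, generic DDQN sketch with linear function approximators and a random placeholder environment; the state size, action count, reward, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative only): the agent picks one of N_ACTIONS
# discrete (UE-pair, power-split) combinations at each step.
N_STATE, N_ACTIONS, GAMMA = 4, 6, 0.9

# Linear Q-approximators stand in for the deep networks.
w_online = rng.normal(size=(N_STATE, N_ACTIONS))
w_target = w_online.copy()

def q_values(w, s):
    """Q(s, ·) under a linear function approximator."""
    return s @ w

def ddqn_target(reward, next_state):
    """Double-DQN target: online net selects the action, target net evaluates it."""
    a_star = int(np.argmax(q_values(w_online, next_state)))
    return reward + GAMMA * q_values(w_target, next_state)[a_star]

def update(state, action, reward, next_state, lr=0.05):
    """One gradient step on the TD error for the chosen action."""
    td_err = ddqn_target(reward, next_state) - q_values(w_online, state)[action]
    w_online[:, action] += lr * td_err * state

# Dummy interaction loop standing in for the NOMA environment.
for step in range(200):
    s = rng.normal(size=N_STATE)
    a = int(np.argmax(q_values(w_online, s)))  # greedy action selection
    r = rng.normal()                           # placeholder sum-rate reward
    s_next = rng.normal(size=N_STATE)
    update(s, a, r, s_next)
    if step % 50 == 0:                         # periodic target-network sync
        w_target = w_online.copy()
```

Decoupling selection from evaluation in this way is what distinguishes DDQN from vanilla DQN and mitigates the overestimation bias of the max operator.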

Original language: English
Pages (from-to): 722-731
Number of pages: 10
Journal: IEEJ Transactions on Electrical and Electronic Engineering
Volume: 17
Issue number: 5
DOI
Publication status: Published - May 2022

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering
