In a cellular system, the Worst Case User (WCU), whose distances to its three nearest Base Stations (BSs) are nearly equal, usually achieves the lowest performance. Improving user performance, especially for the WCU, is a major challenge for both network designers and operators. This paper studies the WCU in terms of coverage probability, analyzed with tools from stochastic geometry, and data rate optimization under a transmission power constraint, solved with a reinforcement learning technique, both under the Stretched Pathloss Model (SPLM). In the analysis, only fast fading on the links from the serving BSs to the WCU is considered, which yields a lower bound on the coverage probability. Furthermore, the paper assumes that the Coordinated Multi-Point (CoMP) technique is employed only for the WCU, enhancing its downlink signal while avoiding an explosion of Intercell Interference (ICI). Through analysis and simulation, the paper shows that increasing the transmission power is a viable way to improve WCU performance in poor wireless environments, whereas in good environments the deployment of advanced techniques such as Joint Transmission (JT), Joint Scheduling (JS), and reinforcement learning is the more suitable solution.
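For orientation, the quantities named in the abstract can be written in the standard stochastic-geometry form below. The stretched exponential pathloss gain is the commonly used form of the SPLM and the cooperating-set notation is an assumption for illustration; it is not claimed to be the exact formulation derived in the paper.

```latex
% Assumed SPLM pathloss gain at distance r, with model constants \alpha > 0 and 0 < \beta \le 1
\ell(r) = e^{-\alpha r^{\beta}}
% Coverage probability of the WCU at SINR threshold \theta, with JT from the set \mathcal{C}
% of cooperating BSs; h_i is the fast fading on the serving links, \sigma^2 the noise power
P_c(\theta) = \Pr\!\left[ \frac{\sum_{i \in \mathcal{C}} P_i\, h_i\, \ell(r_i)}
{\sigma^2 + \sum_{j \notin \mathcal{C}} P_j\, \ell(r_j)} > \theta \right]
```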
Yong TIAN Peng WANG Xinyue HOU Junpeng YU Xiaoyan PENG Hongshu LIAO Lin GAO
The electromagnetic environment is increasingly complex and changeable, and radar must meet the execution requirements of a variety of tasks. Modern radars should therefore raise their level of intelligence and be able to learn autonomously in dynamic countermeasure scenarios, so that the countermeasure strategy shifts from a traditional, fixed anti-interference strategy to one that is implemented dynamically and autonomously with high efficiency. Aiming at optimizing target-tracking performance in scenes where multiple signals coexist, we propose a countermeasure method for cognitive radar based on a deep Q-learning network. In this paper, we analyze the tracking performance of the proposed method and of the Markov Decision Process formulation, respectively, under triangular frequency-sweeping interference. The simulation results show that reinforcement learning offers substantial autonomy and adaptability for solving such problems.
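To make the learning loop described above concrete, the sketch below shows a minimal deep Q-learning agent in PyTorch. The state and action dimensions, the ToyJammingEnv environment, and the reward are hypothetical placeholders standing in for the radar anti-interference scenario; they are not the paper's model or parameters.

```python
# Minimal deep Q-learning sketch for an anti-interference action selector.
# ToyJammingEnv is a stand-in: states are hypothetical spectrum observations
# and actions are candidate waveform/frequency-agility choices.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM, N_ACTIONS = 8, 4          # assumed sizes, not from the paper
GAMMA, EPS, BATCH, LR = 0.99, 0.1, 32, 1e-3


class ToyJammingEnv:
    """Hypothetical environment: reward is positive when the chosen action
    avoids the currently jammed band, negative otherwise."""
    def reset(self):
        self.jammed_band = random.randrange(N_ACTIONS)
        return self._obs()

    def _obs(self):
        s = np.random.rand(STATE_DIM).astype(np.float32)
        s[self.jammed_band] += 1.0           # crude cue about the jammed band
        return s

    def step(self, action):
        reward = 1.0 if action != self.jammed_band else -1.0
        self.jammed_band = random.randrange(N_ACTIONS)  # sweeping jammer moves on
        return self._obs(), reward


class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

    def forward(self, x):
        return self.net(x)


def train(episodes=200, steps=50):
    env, qnet = ToyJammingEnv(), QNet()
    opt = optim.Adam(qnet.parameters(), lr=LR)
    buffer = deque(maxlen=5000)              # experience replay buffer
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # epsilon-greedy action selection
            if random.random() < EPS:
                a = random.randrange(N_ACTIONS)
            else:
                with torch.no_grad():
                    a = int(qnet(torch.from_numpy(s)).argmax())
            s2, r = env.step(a)
            buffer.append((s, a, r, s2))
            s = s2
            if len(buffer) >= BATCH:
                batch = random.sample(buffer, BATCH)
                st = torch.tensor(np.array([b[0] for b in batch]))
                at = torch.tensor([b[1] for b in batch])
                rt = torch.tensor([b[2] for b in batch])
                st2 = torch.tensor(np.array([b[3] for b in batch]))
                # one-step TD target: r + gamma * max_a' Q(s', a')
                with torch.no_grad():
                    target = rt + GAMMA * qnet(st2).max(dim=1).values
                q = qnet(st).gather(1, at.unsqueeze(1)).squeeze(1)
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return qnet


if __name__ == "__main__":
    train()
```

A practical implementation would typically add a separate target network and a decaying exploration rate; they are omitted here to keep the sketch short.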