
Keyword Search Result

[Keyword] allocation (549 hits)

1-20 of 549 hits

  • Joint Optimization of Task Offloading and Resource Allocation for UAV-Assisted Edge Computing: A Stackelberg Bilayer Game Approach Open Access

    Peng WANG  Guifen CHEN  Zhiyao SUN  

     
    PAPER-Information Network

    Publicized: 2024/05/21  Vol: E107-D No:9  Page(s): 1174-1181

    Unmanned Aerial Vehicle (UAV)-assisted Mobile Edge Computing (MEC) can provide mobile users (MUs) with additional computing services and a wide range of connectivity. This paper investigates a joint task offloading and resource allocation strategy for UAV-assisted MEC systems in complex scenarios, with the goal of reducing the total system cost, which consists of task execution latency and energy consumption. We adopt a game-theoretic approach and model the interaction between the MEC server and the MUs as a Stackelberg bilayer game. The original problem with complex multiple constraints is then transformed into a dual problem using the Lagrangian duality method. Furthermore, we prove that the modeled Stackelberg bilayer game has a unique Nash equilibrium solution. To obtain an approximately optimal solution to the proposed problem, we propose a two-stage alternating iteration (TASR) algorithm based on the subgradient method and the marginal revenue optimization method. We evaluate the performance of the proposed algorithm through detailed simulation experiments. The simulation results show that the proposed algorithm is superior to and more robust than other benchmark methods and can effectively reduce the task execution latency and total system cost in different scenarios.
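
    The TASR details are specific to the paper, but the dual step it builds on can be illustrated. A minimal sketch, assuming a toy sum-log-rate utility, hypothetical channel gains, and a single power budget (none of which come from the paper): the Lagrange multiplier is updated by a projected subgradient while the primal variables follow from the current multiplier.

        # Illustrative only: generic projected-subgradient update on a Lagrangian
        # dual, in the spirit of (but not identical to) the paper's TASR algorithm.
        import numpy as np

        def dual_subgradient(gains, p_max, steps=200, lr=0.05):
            """Maximize sum(log(1 + g*p)) s.t. sum(p) <= p_max via its Lagrangian dual."""
            lam = 1.0                                    # Lagrange multiplier
            for _ in range(steps):
                # Primal update: water-filling given the current multiplier.
                p = np.maximum(1.0 / lam - 1.0 / gains, 0.0)
                # Dual update: the subgradient equals the constraint violation.
                lam = max(lam + lr * (p.sum() - p_max), 1e-6)
            return p, lam

        p, lam = dual_subgradient(np.array([0.5, 1.0, 2.0]), p_max=3.0)
        print(p, p.sum())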

  • A Joint Coverage Constrained Task Offloading and Resource Allocation Method in MEC Open Access

    Daxiu ZHANG  Xianwei LI  Bo WEI  Yukun SHI  

     
    PAPER-Mobile Information Network and Personal Communications

    Vol: E107-A No:8  Page(s): 1277-1285

    With the increasing number of Mobile User Equipments (MUEs), numerous tasks with high resource requirements are generated. However, MUEs have limited computational resources, computing power and storage space. In this paper, a joint coverage-constrained task offloading and resource allocation method based on deep reinforcement learning is proposed. The aim is to offload tasks that cannot be processed locally to edge servers, alleviating the conflict between the resource constraints of MUEs and high-performance task processing. The studied problem accounts for the dynamic variability and complexity of the system model, coverage, offloading decisions, communication relationships and resource constraints. An entropy weight method is used to optimize the resource allocation process and balance energy consumption and execution time. The results show how the number of tasks and MUEs affects the execution time and energy consumption of the task offloading and resource allocation processes, serving the interest of the service provider and enhancing the user experience.
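
    The entropy weight method referenced above is a standard procedure, so a small sketch may help; the criteria values and the use of the resulting weights below are hypothetical, not taken from the paper.

        # Illustrative only: classic entropy weight method for balancing two
        # criteria (here, made-up energy and delay columns).
        import numpy as np

        def entropy_weights(X):
            """X: (n_alternatives, n_criteria) matrix of non-negative scores."""
            P = X / X.sum(axis=0, keepdims=True)               # column-wise proportions
            P = np.clip(P, 1e-12, 1.0)                         # avoid log(0)
            e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per criterion
            d = 1.0 - e                                        # degree of divergence
            return d / d.sum()                                 # normalized weights

        # Hypothetical candidate allocations scored by energy (J) and delay (s).
        scores = np.array([[1.2, 0.30],
                           [0.9, 0.45],
                           [1.5, 0.25]])
        w = entropy_weights(scores)
        print(w, (scores @ w).argmin())   # weights and the lowest weighted-cost candidate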

  • Sum Rate Maximization for Multiuser Full-Duplex Wireless Powered Communication Networks Open Access

    Keigo HIRASHIMA  Teruyuki MIYAJIMA  

     
    PAPER-Wireless Communication Technologies

    Vol: E107-B No:8  Page(s): 564-572

    In this paper, we consider an orthogonal frequency division multiple access (OFDMA)-based multiuser full-duplex wireless powered communication network (FD WPCN) system with beamforming (BF) at an energy transmitter (ET). The ET performs BF to efficiently transmit energy to multiple users while suppressing interference to an information receiver (IR). Multiple users operating in full-duplex mode harvest energy from the signals sent by the ET while simultaneously transmitting information to the IR using the harvested energy. We analytically demonstrate that the FD WPCN is superior to its half-duplex (HD) WPCN counterpart in the high-SNR regime. We propose a transmitter design method that maximizes the sum rate by determining the BF at the ET, power allocation at both the ET and users, and sub-band allocation. Simulation results show the effectiveness of the proposed method.

  • Agent Allocation-Action Learning with Dynamic Heterogeneous Graph in Multi-Task Games Open Access

    Xianglong LI  Yuan LI  Jieyuan ZHANG  Xinhai XU  Donghong LIU  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2024/04/03  Vol: E107-D No:8  Page(s): 1040-1049

    In many real-world problems, a complex task is typically composed of a set of subtasks that follow a certain execution order. Traditional multi-agent reinforcement learning methods perform poorly in such multi-task cases, as they treat the whole problem as a single task. For such multi-agent multi-task problems, the heterogeneous relationships, i.e., subtask-subtask, agent-agent, and subtask-agent, are important characteristics that should be exploited to improve learning performance. This paper proposes a dynamic heterogeneous graph based agent allocation-action learning framework. Specifically, a dynamic heterogeneous graph model is first designed to characterize how the heterogeneous relationships vary over time. A multi-subgraph partition method is then introduced to extract features from the heterogeneous graphs. Leveraging the extracted features, a hierarchical framework is designed to learn the dynamic allocation of agents among subtasks as well as cooperative behaviors. Experimental results demonstrate that our framework outperforms recent representative methods on two challenging tasks, i.e., SAVETHECITY and the Google Research Football full game.

  • Cloud-Edge-Device Collaborative High Concurrency Access Management for Massive IoT Devices in Distribution Grid Open Access

    Shuai LI  Xinhong YOU  Shidong ZHANG  Mu FANG  Pengping ZHANG  

     
    PAPER-Systems and Control

    Publicized: 2023/10/26  Vol: E107-A No:7  Page(s): 946-957

    Emerging data-intensive services in the distribution grid impose requirements for high-concurrency access by massive internet of things (IoT) devices. However, the lack of effective high-concurrency access management results in severe performance degradation. To address this challenge, we propose a cloud-edge-device collaborative high-concurrency access management algorithm based on multi-timescale joint optimization of channel pre-allocation and load balancing degree. We formulate an optimization problem to minimize the weighted sum of the edge-cloud load balancing degree and the queuing delay under an access success rate constraint. The problem is decomposed into a large-timescale channel pre-allocation subproblem, solved by a device-edge collaborative access priority scoring mechanism, and a small-timescale data access control subproblem, solved by a discounted empirical matching mechanism (DEM) that is aware of the high-concurrency number and the queue backlog. In particular, the information uncertainty caused by externalities is tackled by exploiting discounted empirical performance, which accurately captures the influence of historical time points on the present preference value. Simulation results demonstrate the effectiveness of the proposed algorithm in reducing the edge-cloud load balancing degree and queuing delay.

  • Federated Deep Reinforcement Learning for Multimedia Task Offloading and Resource Allocation in MEC Networks Open Access

    Rongqi ZHANG  Chunyun PAN  Yafei WANG  Yuanyuan YAO  Xuehua LI  

     
    PAPER-Network

    Vol: E107-B No:6  Page(s): 446-457

    With the maturation of 5G technology in recent years, multimedia services such as live video streaming and online games have flourished on the Internet. These multimedia services frequently demand low latency, which poses a significant challenge for computing multimedia tasks with stringent latency requirements. Mobile edge computing (MEC) is considered a key technology for addressing this challenge. It offloads computation-intensive tasks to edge servers deployed close to mobile nodes, which reduces task execution latency and relieves the computing pressure on multimedia devices. To use the MEC paradigm reasonably and efficiently, resource allocation becomes a new challenge. In this paper, we focus on multimedia tasks that need to be uploaded and processed in the network. We formulate an optimization problem with the goal of minimizing the latency and energy consumption required to perform tasks on multimedia devices. To solve this complex, non-convex problem, we cast it as a distributed deep reinforcement learning (DRL) problem and propose a federated Dueling deep Q-network (DDQN) based multimedia task offloading and resource allocation algorithm (FDRL-DDQN). In the algorithm, DRL is trained on the local device, while federated learning (FL) aggregates and updates the parameters of the trained local models. Furthermore, to handle the non-independent and identically distributed (non-IID) data of multimedia devices, we develop a method for selecting the participating federated devices. The simulation results show that the FDRL-DDQN algorithm reduces the total cost by 31.3% compared to the DQN algorithm when the task data size is 1000 kbit, and by up to 35.3% compared to the traditional baseline algorithm.
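
    The federated aggregation step mentioned above can be sketched generically. A minimal example, assuming simple data-size weighting and a toy two-layer parameter list; the paper's device-selection rule and network architecture are not reproduced here.

        # Illustrative only: federated averaging of locally trained parameters,
        # i.e., the aggregation that FL performs over the local DRL models.
        import numpy as np

        def fed_avg(local_params, n_samples):
            """Weighted average of per-device parameter lists (one array per layer)."""
            w = np.asarray(n_samples, dtype=float)
            w /= w.sum()
            return [sum(wi * layer for wi, layer in zip(w, layers))
                    for layers in zip(*local_params)]

        rng = np.random.default_rng(0)
        devices = [[rng.normal(size=(4, 8)), rng.normal(size=8)] for _ in range(3)]
        global_params = fed_avg(devices, n_samples=[120, 60, 20])
        print([p.shape for p in global_params])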

  • Performance of Collaborative MIMO Reception with User Grouping Schemes

    Eiku ANDO  Yukitoshi SANADA  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2023/10/23  Vol: E107-B No:1  Page(s): 253-261

    This paper proposes user equipment (UE) grouping schemes and evaluates the performance of a scheduling scheme applied to each formed group in collaborative multiple-input multiple-output (MIMO) reception. Previous research has not presented a criterion for UE grouping or the effects of group scheduling. Two grouping criteria are presented: a base station (BS)-oriented one and a UE-oriented one. The BS-oriented full-search scheme achieves ideal performance, but it requires knowledge of the relative positions of all UEs; therefore, a UE-oriented local-search scheme is also proposed. As the scheduling scheme, proportional fairness scheduling is used to allocate resources to each formed group. When the total number of UEs increases, the difference in the number of UEs among groups grows. Numerical results obtained through computer simulation show that, compared with the conventional scheme, the proposed schemes increase the throughput per user but reduce the fairness among users as the number of UEs in a cell increases.
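
    Proportional fairness scheduling itself is standard; here is a minimal sketch with hypothetical per-slot rates and averaging window (the paper's group-level details are not modeled).

        # Illustrative only: pick, per slot, the group maximizing instantaneous
        # rate divided by its moving-average throughput (the PF metric).
        import numpy as np

        def pf_schedule(inst_rates, t_c=50.0):
            """inst_rates: (n_slots, n_groups) achievable rates per slot."""
            n_groups = inst_rates.shape[1]
            avg = np.full(n_groups, 1e-3)            # average throughput per group
            chosen = []
            for rates in inst_rates:
                k = int(np.argmax(rates / avg))      # PF metric
                chosen.append(k)
                served = np.zeros(n_groups)
                served[k] = rates[k]
                avg = (1.0 - 1.0 / t_c) * avg + served / t_c   # exponential averaging
            return chosen

        rng = np.random.default_rng(1)
        print(pf_schedule(rng.rayleigh(size=(10, 3))))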

  • Crosstalk-Aware Resource Allocation Based on Optical Path Adjacency and Crosstalk Budget for Space Division Multiplexing Elastic Optical Networks

    Kosuke KUBOTA  Yosuke TANIGAWA  Yusuke HIROTA  Hideki TODE  

     
    PAPER

    Publicized: 2023/09/12  Vol: E107-B No:1  Page(s): 27-38

    To cope with the drastic increase in traffic, space division multiplexing elastic optical networks (SDM-EONs) have been investigated. In the multicore fiber environments that realize SDM-EONs, crosstalk (XT) occurs between optical paths transmitted in the same frequency slots of adjacent cores, and the quality of the optical paths is degraded by the mutual influence of XT. To solve this problem, we propose a core and spectrum assignment method that introduces the concept of prohibited frequency slots to protect the degraded optical paths. First-fit-based spectrum resource allocation algorithms, including that of our previous study, have the problem that only some frequency slots are used at low loads, and XT occurs even though sufficient frequency slots are available. In this study, we therefore propose a core and spectrum assignment method that introduces the concepts of an “adjacency criterion” and an “XT budget” to suppress XT at low and middle loads without worsening the path blocking rate at high loads. We demonstrate the effectiveness of the proposed method in terms of the path blocking rate through computer simulations.
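
    As context for the first-fit baseline discussed above, a minimal sketch of first-fit slot assignment on a single core with prohibited slots skipped; the adjacency-criterion and XT-budget checks of the proposed method are not modeled, and the slot sets below are hypothetical.

        # Illustrative only: plain first-fit spectrum assignment on one core.
        def first_fit(occupied, prohibited, demand, n_slots):
            """Return the first index of `demand` contiguous free, allowed slots."""
            for start in range(n_slots - demand + 1):
                block = range(start, start + demand)
                if all(s not in occupied and s not in prohibited for s in block):
                    return start
            return None   # blocked: no contiguous block available

        occupied = {0, 1, 5}
        prohibited = {2}      # e.g., slots reserved to protect a degraded path
        print(first_fit(occupied, prohibited, demand=3, n_slots=10))   # -> 6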

  • Power Allocation with QoS and Max-Min Fairness Constraints for Downlink MIMO-NOMA System Open Access

    Jia SHAO  Cong LI  Taotao YAN  

     
    PAPER-Transmission Systems and Transmission Equipment for Communications

    Publicized: 2023/09/06  Vol: E106-B No:12  Page(s): 1411-1417

    Non-orthogonal multiple access based multiple-input multiple-output (MIMO-NOMA) systems have been widely used to improve users' achievable rates in millimeter wave (mmWave) communication. To meet the different requirements of the users in each multi-user beam, this paper proposes a power allocation algorithm that satisfies the quality of service (QoS) of the head user while maximizing the minimum rate of the edge users from the perspective of max-min fairness. In this paper, the user closest to the base station (BS) in each beam is regarded as the head user, and the other users are the edge users. An optimization problem based on the max-min fairness criterion is then developed under the constraints of the users' minimum rate requirements and the total transmit power of the BS. The bisection method and the Karush-Kuhn-Tucker (KKT) conditions are used to solve this complex non-convex problem. Simulation results show that both the minimum achievable rate of the edge users and the average rate of all users are significantly improved compared with traditional MIMO-NOMA, which considers only the max-min fairness of users.
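
    The bisection idea can be illustrated on a simplified stand-in problem. A minimal sketch, assuming independent channels where giving user i rate t costs power (2^t - 1)/g_i; the paper's NOMA-specific rate expressions and KKT step are not reproduced.

        # Illustrative only: bisection on a common target rate for max-min
        # power allocation under a total power budget.
        import numpy as np

        def max_min_rate(gains, p_total, tol=1e-6):
            lo, hi = 0.0, float(np.log2(1.0 + p_total * gains.max()))   # rate bracket
            while hi - lo > tol:
                t = 0.5 * (lo + hi)
                needed = ((2.0 ** t - 1.0) / gains).sum()   # power for everyone to reach t
                lo, hi = (t, hi) if needed <= p_total else (lo, t)
            return lo, (2.0 ** lo - 1.0) / gains            # common rate, power split

        rate, power = max_min_rate(np.array([0.8, 1.5, 3.0]), p_total=6.0)
        print(round(rate, 3), power.round(3))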

  • User Scheduling and Clustering for Distributed Antenna Network Using Quantum Computing

    Keishi HANAKAGO  Ryo TAKAHASHI  Takahiro OHYAMA  Fumiyuki ADACHI  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2023/07/24  Vol: E106-B No:11  Page(s): 1210-1218

    In this study, an overloaded large-scale distributed antenna network is considered, in which the number of active users is larger than the number of antennas distributed in a base station coverage area (called a cell). To avoid overload, the users in each cell are divided into multiple user groups, and, to reduce the computational complexity required for multi-user multiple-input and multiple-output (MU-MIMO), the users in each group are further grouped into multiple user clusters so that cluster-wise distributed MU-MIMO can be performed in parallel within each user group. However, as the network size increases, conventional computational methods may not be able to solve, within a practical amount of time, the combinatorial optimization problems, such as user scheduling and user clustering, that are required for performing cluster-wise distributed MU-MIMO. In this study, we apply quantum computing to these combinatorial optimization problems for an overloaded distributed antenna network and propose a quantum computing-based user scheduling and clustering method. Computer simulation results indicate that, as quantum computers and their related algorithms evolve, the proposed method can enable large-scale dense wireless systems and real-time optimization with a short optimization execution cycle.

  • Low Complexity Resource Allocation in Frequency Domain Non-Orthogonal Multiple Access Open Access

    Satoshi DENNO  Taichi YAMAGAMI  Yafei HOU  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2023/05/08  Vol: E106-B No:10  Page(s): 1004-1014

    This paper proposes low-complexity resource allocation for frequency-domain non-orthogonal multiple access in which many devices access a base station. The number of devices is assumed to exceed the amount of resources, which is demanded for network capacity enhancement in massive machine type communications (mMTC). This paper proposes two types of resource allocation techniques, both based on a MIN-MAX approach. One of them seeks a good resource allocation using only the channel gains. The other applies the message passing algorithm (MPA) for better resource allocation. The proposed resource allocation techniques are evaluated by computer simulation in frequency-domain non-orthogonal multiple access. The proposed technique with the MPA achieves the best bit error rate (BER) performance among the proposed techniques. However, the computational complexity of the proposed technique with channel gains is much smaller than that of the technique with the MPA, whereas its BER performance is only about 0.1 dB inferior to that with the MPA at a BER of 10^-4 with an overloading ratio of 1.5. Both techniques attain a gain of about 10 dB at a BER of 10^-4 with an overloading ratio of 2.0. Their complexity is 10-16 as small as that of the conventional technique.
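
    The channel-gain-only technique is paper-specific, but a MIN-MAX-flavored greedy is easy to sketch: assign each device to a least-loaded resource and, among those, the one with the best gain. The criterion and the random gains below are assumptions, not the paper's algorithm.

        # Illustrative only: greedy allocation in a MIN-MAX spirit.
        import numpy as np

        def greedy_min_max(gains):
            """gains: (n_devices, n_resources); returns a resource index per device."""
            n_dev, n_res = gains.shape
            load = np.zeros(n_res, dtype=int)
            alloc = np.empty(n_dev, dtype=int)
            # Serve devices with the weakest best channel first.
            for d in np.argsort(gains.max(axis=1)):
                candidates = np.flatnonzero(load == load.min())   # least-loaded resources
                r = candidates[np.argmax(gains[d, candidates])]   # best gain among them
                alloc[d] = r
                load[r] += 1
            return alloc

        rng = np.random.default_rng(2)
        print(greedy_min_max(rng.exponential(size=(6, 4))))   # overloading ratio 1.5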

  • Backup Resource Allocation Model with Probabilistic Protection Considering Service Delay

    Shinya HORIMOTO  Fujun HE  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/03/24  Vol: E106-B No:9  Page(s): 798-816

    This paper proposes a backup resource allocation model for virtual network functions (VNFs) that minimizes the total computing capacity allocated for backup while considering the service delay. If failures occur on primary hosts, the VNFs on the failed hosts are recovered by backup hosts whose allocation is determined in advance. We introduce probabilistic protection, in which the probability that the protection provided by a backup host fails is limited to a given value; this allows backup resources to be shared and thus reduces the total allocated computing capacity. Previous work does not consider a service delay constraint in the backup resource allocation problem. The proposed model constrains, within a given value, the probability that the service delay, which consists of the networking delay between hosts and the processing delay in each VNF, exceeds its threshold. We introduce a basic algorithm to solve the formulated delay-constrained optimization problem. For problem sizes that the basic algorithm cannot solve within an acceptable computation time, we develop a simulated annealing algorithm that incorporates Yen's algorithm to handle the delay constraint heuristically. We observe that both algorithms in the proposed model reduce the total allocated computing capacity by up to 56.3% compared with a baseline, and that the simulated annealing algorithm can obtain feasible solutions for problems that the basic algorithm cannot.
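
    The simulated annealing part follows the usual template; below is a minimal generic sketch, with a toy cost, neighborhood move, and cooling schedule standing in for the paper's backup-assignment search and its Yen's-algorithm delay check.

        # Illustrative only: generic simulated annealing loop.
        import math
        import random

        def simulated_annealing(init, cost, neighbor, t0=1.0, alpha=0.995, steps=5000):
            x, c = init, cost(init)
            best, best_c, t = x, c, t0
            for _ in range(steps):
                y = neighbor(x)
                cy = cost(y)
                # Always accept improvements; accept worse moves with Boltzmann probability.
                if cy <= c or random.random() < math.exp((c - cy) / max(t, 1e-12)):
                    x, c = y, cy
                    if c < best_c:
                        best, best_c = x, c
                t *= alpha          # geometric cooling
            return best, best_c

        # Toy usage: minimize a quadratic over integers by +/-1 moves.
        sol, val = simulated_annealing(10, cost=lambda v: (v - 3) ** 2,
                                       neighbor=lambda v: v + random.choice((-1, 1)))
        print(sol, val)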

  • Level Allocation of Four-Level Pulse-Amplitude Modulation Signal in Optically Pre-Amplified Receiver Systems

    Hiroki KAWAHARA  Koji IGARASHI  Kyo INOUE  

     
    PAPER-Fiber-Optic Transmission for Communications

    Publicized: 2023/02/03  Vol: E106-B No:8  Page(s): 652-659

    This study numerically investigates the symbol-level allocation of four-level pulse-amplitude modulation (PAM4) signals for optically pre-amplified receiver systems. Three level-allocation schemes are examined: intensity-equispaced, amplitude-equispaced, and numerically optimized. Numerical simulations are conducted to comprehensively compare the receiver sensitivities of these level-allocation schemes under various system conditions. The results show that the relative performance of the level allocations depends strongly on the system conditions, namely the bandwidth of the amplified spontaneous emission light, the modulation bandwidth, and the signal extinction ratio (ER). The mechanisms underlying these dependencies are also discussed.
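
    The first two allocation schemes have simple closed forms, which a short sketch can make concrete. The normalization and the extinction-ratio definition (r = highest/lowest optical power) below are assumptions; the paper's exact conventions may differ.

        # Illustrative only: intensity- vs amplitude-equispaced PAM4 levels.
        import numpy as np

        def pam4_levels(r=10.0):
            """Return (intensity-equispaced, amplitude-equispaced) intensity levels."""
            p0, p3 = 1.0 / r, 1.0                     # lowest / highest optical power
            intensity_eq = np.linspace(p0, p3, 4)     # equal spacing in power
            amplitude_eq = np.linspace(np.sqrt(p0), np.sqrt(p3), 4) ** 2   # equal field spacing
            return intensity_eq, amplitude_eq

        i_lvls, a_lvls = pam4_levels()
        print(i_lvls.round(3), a_lvls.round(3))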

  • UE Set Selection for RR Scheduling in Distributed Antenna Transmission with Reinforcement Learning Open Access

    Go OTSURU  Yukitoshi SANADA  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2023/01/13  Vol: E106-B No:7  Page(s): 586-594

    In this paper, user equipment (UE) set selection in the allocation sequences of round-robin (RR) scheduling is proposed for distributed antenna transmission with block diagonalization (BD) pre-coding. Prior research has investigated the selection of the initial phase of the UE allocation sequences in RR scheduling; the performance of that RR scheduling is inferior to that of proportional fair (PF) scheduling under severe intra-cell interference. In this paper, multi-input multi-output transmission with BD pre-coding is applied, and the UE sets in the allocation sequences are eliminated with reinforcement learning. After the modification of an RR allocation sequence, no estimated-throughput calculation is required for UE set selection. Numerical results obtained through computer simulation show that maximum selection, one of the criteria for initial phase selection, outperforms weighted PF scheduling within a restricted region in terms of computational complexity, fairness, and throughput.

  • Shared Backup Allocation Model of Middlebox Based on Workload-Dependent Failure Rate

    Han ZHANG  Fujun HE  Eiji OKI  

     
    PAPER-Network

    Publicized: 2022/11/11  Vol: E106-B No:5  Page(s): 427-438

    With network function virtualization technology, a middlebox can be deployed as software on commercial servers rather than on dedicated physical servers. A backup server is necessary to ensure the normal operation of the middlebox. The workload can affect the failure rate of the backup server, yet the impact of a workload-dependent failure rate on backup server allocation with respect to unavailability has not been extensively studied. This paper proposes a shared backup allocation model for middleboxes that considers the workload-dependent failure rate of the backup server. Backup resources on a backup server can be assigned to multiple functions. We observe that a function has four possible states and analyze the state transitions within the system. Through a queueing approach, we compute the probability of each function being available or unavailable for a given assignment, and obtain the unavailability of each function. The proposed model is designed to find an assignment that minimizes the maximum unavailability among functions. We develop a simulated annealing algorithm to solve this problem. We evaluate and compare the performance of the proposed and baseline models under different experimental conditions. Based on the results, we observe that, compared with the baseline model, the proposed model reduces the maximum unavailability by an average of 29% in the examined cases.

  • Edge Computing Resource Allocation Algorithm for NB-IoT Based on Deep Reinforcement Learning

    Jiawen CHU  Chunyun PAN  Yafei WANG  Xiang YUN  Xuehua LI  

     
    PAPER-Network

    Publicized: 2022/11/04  Vol: E106-B No:5  Page(s): 439-447

    Mobile edge computing (MEC) technology guarantees the privacy and security of large-scale data in the Narrowband-IoT (NB-IoT) by deploying MEC servers near base stations to provide sufficient computing, storage, and data processing capacity to meet the delay and energy consumption requirements of NB-IoT terminal equipment. For the NB-IoT MEC system, this paper proposes a resource allocation algorithm based on deep reinforcement learning to optimize the total cost of task offloading and execution. Since the formulated problem is a mixed-integer non-linear programming (MINLP) problem, we cast it as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using a dueling Q-learning network algorithm. Simulation results show that, compared with the deep Q-learning network and the all-local and all-offload cost algorithms, the proposed algorithm effectively guarantees the success rates of task offloading and execution. In addition, when the execution task volume is 200 Kbit, the total system cost of the proposed algorithm is reduced by at least 1.3%, and when the execution task volume is 600 Kbit, the total cost of executing tasks is reduced by up to 16.7%.

  • Migration Model for Distributed Server Allocation

    Souhei YANASE  Fujun HE  Haruto TAKA  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network Management/Operation

    Publicized: 2022/07/05  Vol: E106-B No:1  Page(s): 44-56

    This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server so as to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid instability. We develop a model in which each user can migrate to another server while continuing to receive service. We formulate the proposed model as an integer linear programming problem and prove that the considered problem is NP-complete. We also introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared with the conventional model.
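
    The formulation is an assignment-type integer program; a brute-force stand-in on a toy instance shows the structure (the delay matrix, the capacities, and the absence of migration dynamics are all assumptions, not the paper's model).

        # Illustrative only: assign each user to one server to minimize total
        # delay subject to a per-server capacity, by exhaustive search.
        from itertools import product

        def best_assignment(delay, capacity):
            """delay[u][s]: user-to-server delay; capacity[s]: max users per server."""
            n_users, n_servers = len(delay), len(delay[0])
            best, best_cost = None, float("inf")
            for assign in product(range(n_servers), repeat=n_users):
                if any(assign.count(s) > capacity[s] for s in range(n_servers)):
                    continue                       # violates a server capacity
                cost = sum(delay[u][assign[u]] for u in range(n_users))
                if cost < best_cost:
                    best, best_cost = assign, cost
            return best, best_cost

        delay = [[1, 4], [2, 3], [5, 1]]           # 3 users x 2 servers
        print(best_assignment(delay, capacity=[2, 2]))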

  • Robust Optimization Model for Primary and Backup Capacity Allocations against Multiple Physical Machine Failures under Uncertain Demands in Cloud

    Mitsuki ITO  Fujun HE  Kento YOKOUCHI  Eiji OKI  

     
    PAPER-Network

    Publicized: 2022/07/05  Vol: E106-B No:1  Page(s): 18-34

    This paper proposes a robust optimization model for probabilistic protection under uncertain capacity demands to minimize the total required capacity against multiple simultaneous failures of physical machines. The proposed model determines both primary and backup virtual machine allocations simultaneously under the probabilistic protection guarantee. To express the uncertainty of capacity demands, we introduce an uncertainty set that considers the upper bound of the total demand and the upper and lower bounds of each demand. The robust optimization technique is applied to the optimization model to deal with two uncertainties: failure event and capacity demand. With this technique, the model is formulated as a mixed integer linear programming (MILP) problem. To solve larger sized problems, a simulated annealing (SA) heuristic is introduced. In SA, we obtain the capacity demands by solving maximum flow problems. Numerical results show that our proposed model reduces the total required capacity compared with the conventional model by determining both primary and backup virtual machine allocations simultaneously. We also compare the results of MILP, SA, and a baseline greedy algorithm. For a larger sized problem, we obtain approximate solutions in a practical time by using SA and the greedy algorithm.

  • Reinforcement Learning for QoS-Constrained Autonomous Resource Allocation with H2H/M2M Co-Existence in Cellular Networks

    Xing WEI  Xuehua LI  Shuo CHEN  Na LI  

     
    PAPER

    Publicized: 2022/05/27  Vol: E105-B No:11  Page(s): 1332-1341

    Machine-to-Machine (M2M) communication plays a pivotal role in the evolution of the Internet of Things (IoT). Cellular networks, which were originally designed mainly for Human-to-Human (H2H) communications, are considered a key enabler for M2M communications. The introduction of M2M users causes a series of problems for traditional H2H users, e.g., interference between the various types of traffic. Resource allocation is an effective solution to these problems. In this paper, we consider shared resource block (RB) and power allocation in an H2H/M2M coexistence scenario, where M2M users are subdivided into delay-tolerant and delay-sensitive types. We first model the RB-power allocation problem as capacity maximization under the Quality-of-Service (QoS) constraints of the different traffic types. Then, a learning framework is introduced in which a complex agent is built from simpler subagents, providing the basis for a distributed deployment scheme. Furthermore, we propose a distributed Q-learning based autonomous RB-power allocation algorithm (DQ-ARPA), which enables machine-type network gateways (MTCGs), acting as agents, to learn the wireless environment and choose the RB and power autonomously so as to maximize the capacity of the M2M pairs while ensuring the QoS requirements of critical services. Simulation results indicate that, with an appropriate reward design, the proposed scheme succeeds in reducing the impact of delay-tolerant machine-type users on critical services in terms of SINR thresholds and outage ratios.
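
    The learning mechanism behind such agents is ordinary Q-learning with epsilon-greedy exploration; below is a minimal tabular sketch with a made-up environment and reward (the paper's state/action spaces, reward shaping, and subagent structure are not reproduced).

        # Illustrative only: tabular Q-learning with an epsilon-greedy policy.
        import numpy as np

        rng = np.random.default_rng(3)
        n_states, n_actions = 4, 6        # e.g., interference level x (RB, power) choice
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, eps = 0.1, 0.9, 0.1

        def step(state, action):
            """Hypothetical environment: random next state, reward favoring low actions."""
            reward = 1.0 - 0.1 * action + rng.normal(scale=0.05)
            return rng.integers(n_states), reward

        state = 0
        for _ in range(2000):
            if rng.random() < eps:                    # explore
                action = int(rng.integers(n_actions))
            else:                                     # exploit
                action = int(Q[state].argmax())
            next_state, reward = step(state, action)
            # Standard temporal-difference update.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

        print(Q.argmax(axis=1))   # learned best action per state (favors action 0)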

  • Penalized and Decentralized Contextual Bandit Learning for WLAN Channel Allocation with Contention-Driven Feature Extraction

    Kota YAMASHITA  Shotaro KAMIYA  Koji YAMAMOTO  Yusuke KODA  Takayuki NISHIO  Masahiro MORIKURA  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Publicized: 2022/04/11  Vol: E105-B No:10  Page(s): 1268-1279

    In this study, a contextual multi-armed bandit (CMAB)-based decentralized channel exploration framework is proposed that disentangles the channel utility function (i.e., the reward) with respect to contending neighboring access points (APs). The framework enables APs to evaluate observed rewards compositionally across contending APs, providing both robustness against reward fluctuations caused by neighboring APs changing channels and the ability to assess even unexplored channels. To realize this framework, we propose contention-driven feature extraction (CDFE), which extracts the adjacency relation among APs under contention and forms the basis for expressing reward functions in disentangled form, that is, as a linear combination of parameters associated with neighboring APs under contention. This allows the CMAB to be leveraged with joint linear upper confidence bound (JLinUCB) exploration and lets us examine the effectiveness of the proposed framework. Moreover, we address the problem of non-convergence (the channel exploration cycle) by proposing a penalized JLinUCB (P-JLinUCB), based on the key idea of introducing a discount parameter to the reward for exploiting a different channel before and after a learning round. Numerical evaluations confirm that the proposed method allows APs to assess channel quality robustly against reward fluctuations through CDFE and achieves better convergence properties with P-JLinUCB.
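
    JLinUCB itself is paper-specific, but the underlying linear-UCB machinery is standard. Below is a minimal disjoint LinUCB sketch with toy contexts and rewards; the contention-driven features, the joint parameterization, and the penalty term of P-JLinUCB are not modeled.

        # Illustrative only: ridge-regression reward model per arm plus an
        # upper-confidence exploration bonus (classic disjoint LinUCB).
        import numpy as np

        class LinUCB:
            def __init__(self, n_arms, dim, alpha=1.0):
                self.alpha = alpha
                self.A = [np.eye(dim) for _ in range(n_arms)]   # ridge Gram matrices
                self.b = [np.zeros(dim) for _ in range(n_arms)]

            def select(self, x):
                scores = []
                for A, b in zip(self.A, self.b):
                    A_inv = np.linalg.inv(A)
                    theta = A_inv @ b                            # reward estimate
                    bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # confidence width
                    scores.append(theta @ x + bonus)
                return int(np.argmax(scores))

            def update(self, arm, x, reward):
                self.A[arm] += np.outer(x, x)
                self.b[arm] += reward * x

        rng = np.random.default_rng(4)
        learner = LinUCB(n_arms=3, dim=4)
        for _ in range(500):
            x = rng.normal(size=4)                   # context, e.g., contention features
            arm = learner.select(x)
            reward = float(x[arm] + rng.normal(scale=0.1))   # hypothetical reward
            learner.update(arm, x, reward)
        print("chosen arm for a sample context:", learner.select(rng.normal(size=4)))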

1-20 of 549 hits