
Keyword Search Result

[Keyword] EDGE computing (37 hits)

Showing hits 1-20 of 37

  • Reservoir-Based 1D Convolution: Low-Training-Cost AI Open Access

    Yuichiro TANAKA  Hakaru TAMUKOH  

     
    LETTER-Neural Networks and Bioengineering

    Publicized: 2023/09/11  Vol: E107-A No:6  Page(s): 941-944

    In this study, we introduce a reservoir-based one-dimensional (1D) convolutional neural network that processes time-series data at a low computational cost, and investigate its performance and training time. Experimental results show that the proposed network requires a lower training computational cost and outperforms conventional reservoir computing in a sound-classification task.
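    As a rough illustration of the reservoir-computing component that such a network builds on (not the authors' code), the following Python sketch drives a fixed random echo-state reservoir with a toy time series and trains only a linear readout by ridge regression; the sizes, leak rate, and toy task are illustrative assumptions.

      # Minimal sketch (illustrative assumptions, not the authors' code): a fixed
      # random reservoir whose states feed a readout trained by ridge regression.
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_res, leak = 1, 100, 0.3

      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.uniform(-0.5, 0.5, (n_res, n_res))
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # keep spectral radius below 1

      def reservoir_states(u):
          """u: (T, n_in) time series -> (T, n_res) reservoir states."""
          x = np.zeros(n_res)
          states = []
          for u_t in u:
              x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
              states.append(x.copy())
          return np.array(states)

      def train_readout(states, targets, ridge=1e-3):
          """Ridge-regression readout: the only trained part of the model."""
          S = np.hstack([states, np.ones((len(states), 1))])   # add bias column
          return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

      u = rng.standard_normal((200, n_in))     # toy input sequence
      y = np.roll(u[:, 0], 1)                  # toy target: one-step delay
      W_out = train_readout(reservoir_states(u), y)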

  • Data-Quality Aware Incentive Mechanism Based on Stackelberg Game in Mobile Edge Computing Open Access

    Shuyun LUO  Wushuang WANG  Yifei LI  Jian HOU  Lu ZHANG  

     
    PAPER-Mobile Information Network and Personal Communications

    Publicized: 2023/09/14  Vol: E107-A No:6  Page(s): 873-880

    Crowdsourcing has become a popular data-collection method that relieves the high cost and latency of data gathering. Since the users involved in crowdsourcing are volunteers, they need incentives that encourage them to provide data. However, current incentive mechanisms mostly pay attention to data quantity while ignoring data quality. In this paper, we design a Data-quality awaRe IncentiVe mEchanism (DRIVE) for collaborative tasks based on the Stackelberg game to motivate users to provide high-quality data; its highlight is a dynamic reward allocation scheme based on the proposed data quality evaluation method. To guarantee a real-time response of the data quality evaluation, we introduce the mobile edge computing framework. Finally, a case study is given, and its real-data experiments demonstrate the superior performance of DRIVE.

  • Resource Allocation for Mobile Edge Computing System Considering User Mobility with Deep Reinforcement Learning

    Kairi TOKUDA  Takehiro SATO  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/10/06  Vol: E107-B No:1  Page(s): 173-184

    Mobile edge computing (MEC) is a key technology for providing services that require low latency by migrating cloud functions to the network edge. The potentially low quality of the wireless channel should be noted when mobile users with limited computing resources offload tasks to an MEC server. To improve transmission reliability, it is necessary to perform resource allocation in an MEC server, taking into account the current channel quality and the resource contention. Several works take a deep reinforcement learning (DRL) approach to address such resource allocation. However, these approaches consider a fixed number of users offloading their tasks and do not assume a situation where the number of users varies due to user mobility. This paper proposes Deep reinforcement learning model for MEC Resource Allocation with Dummy (DMRA-D), an online learning model that addresses resource allocation in an MEC server under the situation where the number of users varies. By adopting a dummy state/action, DMRA-D keeps the state/action representation fixed. Therefore, DMRA-D can continue training a single model regardless of variations in the number of users during operation. Numerical results show that DMRA-D improves the success rate of task submission while continuing to learn under the situation where the number of users varies.
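    The dummy state/action idea can be pictured with a short sketch. The following Python fragment (an assumption-based illustration, not the DMRA-D implementation) pads the per-user state with dummy entries and masks the corresponding actions so the DRL input and output keep a fixed shape when the number of active users changes; MAX_USERS and the feature layout are hypothetical.

      # Minimal sketch (hypothetical layout): fixed-size state via dummy padding.
      import numpy as np

      MAX_USERS = 8
      FEATS_PER_USER = 3                       # e.g., channel quality, task size, deadline
      DUMMY = np.zeros(FEATS_PER_USER)         # placeholder row for an absent user

      def build_state(active_users):
          """active_users: list of per-user feature vectors (length <= MAX_USERS)."""
          rows = [np.asarray(u, dtype=float) for u in active_users]
          rows += [DUMMY] * (MAX_USERS - len(rows))   # pad with dummy users
          return np.concatenate(rows)                 # fixed-size flat state

      def mask_actions(active_users):
          """1 for slots holding a real user, 0 for dummy slots (invalid actions)."""
          mask = np.zeros(MAX_USERS, dtype=int)
          mask[:len(active_users)] = 1
          return mask

      users = [[0.7, 1.2, 0.05], [0.4, 0.8, 0.10]]
      print(build_state(users).shape, mask_actions(users))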

  • Minimization of Energy Consumption in TDMA-Based Wireless-Powered Multi-Access Edge Computing Networks

    Xi CHEN  Guodong JIANG  Kaikai CHI  Shubin ZHANG  Gang CHEN  Jiang LIU  

     
    PAPER-Communication Theory and Signals

    Publicized: 2023/06/19  Vol: E106-A No:12  Page(s): 1544-1554

    Many Internet of Things (IoT) nodes rely on batteries for power. Additionally, the demand for executing compute-intensive and latency-sensitive tasks on IoT nodes is increasing. In some practical scenarios, the computation tasks of wireless devices (WDs) are non-separable, that is, binary offloading strategies should be used. In this paper, we focus on the design of an efficient binary offloading algorithm that minimizes system energy consumption (EC) for TDMA-based wireless-powered multi-access edge computing networks, where WDs either compute tasks locally or offload them to hybrid access points (H-APs). We formulate the EC minimization problem, which is non-convex, and decompose it into a master problem optimizing the binary offloading decision and a subproblem optimizing the wireless power transfer (WPT) duration and the task offloading transmission durations. For the master problem, a DRL-based method is applied to obtain a near-optimal offloading decision. For the subproblem, we first consider the scenario where the nodes have no completion time constraints and obtain the optimal analytical solution. We then consider the scenario with such constraints; by jointly using the golden-section method and the bisection method, the optimal solution can be obtained owing to the convexity of the constraint function. Simulation results show that the proposed DRL-based offloading algorithm can achieve near-minimal EC.
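    For readers unfamiliar with the golden-section method mentioned above, the following Python sketch minimizes a unimodal one-variable function, the kind of search the subproblem relies on; the energy curve used here is a made-up placeholder, not the paper's system model.

      # Golden-section search over one duration variable (placeholder energy model).
      import math

      def golden_section_min(f, a, b, tol=1e-6):
          """Minimize a unimodal f on [a, b] by golden-section search."""
          inv_phi = (math.sqrt(5) - 1) / 2             # ~0.618
          c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
          while abs(b - a) > tol:
              if f(c) < f(d):
                  b, d = d, c
                  c = b - inv_phi * (b - a)
              else:
                  a, c = c, d
                  d = a + inv_phi * (b - a)
          return (a + b) / 2

      # Illustrative energy-vs-duration trade-off (convex, hence unimodal).
      energy = lambda t: 1.0 / t + 4.0 * t
      t_star = golden_section_min(energy, 0.05, 2.0)
      print(round(t_star, 3), round(energy(t_star), 3))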

  • A Network Design Scheme in Delay Sensitive Monitoring Services Open Access

    Akio KAWABATA  Takuya TOJO  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network Management/Operation

    Publicized: 2023/04/19  Vol: E106-B No:10  Page(s): 903-914

    Mission-critical monitoring services, such as finding criminals with monitoring cameras, require rapid detection of newly updated data, so suppressing delay is desirable. Taking this direction, this paper proposes a network design scheme that minimizes this delay for monitoring services consisting of Internet-of-Things (IoT) devices located at terminal endpoints (TEs), databases (DBs), and applications (APLs). The proposed scheme determines the allocation of DBs and APLs and the selection of the server to which each TE belongs. DBs and APLs are allocated on optimal servers selected from multiple servers in the network. We formulate the proposed network design scheme as an integer linear programming problem. The delay reduction effect of the proposed scheme is evaluated under two network topologies and a monitoring camera system network. In the two network topologies, the delays of the proposed scheme are 78 and 80 percent of that of the conventional scheme. In the monitoring camera system network, the delay of the proposed scheme is 77 percent of that of the conventional scheme. These results indicate that the proposed scheme reduces the delay compared to the conventional scheme, in which APLs are located near TEs. The computation time of the proposed scheme is acceptable for the design phase before the service is launched. The proposed scheme can contribute to a network design that detects newly added objects quickly in monitoring services.

  • Backup Resource Allocation Model with Probabilistic Protection Considering Service Delay

    Shinya HORIMOTO  Fujun HE  Eiji OKI  

     
    PAPER-Network

    Publicized: 2023/03/24  Vol: E106-B No:9  Page(s): 798-816

    This paper proposes a backup resource allocation model for virtual network functions (VNFs) that minimizes the total computing capacity allocated for backup while considering the service delay. If primary hosts fail, the VNFs in the failed hosts are recovered by backup hosts whose allocation is pre-determined. We introduce probabilistic protection, where the probability that the protection by a backup host fails is limited within a given value; this allows backup resource sharing to reduce the total allocated computing capacity. Previous work does not consider the service delay constraint in the backup resource allocation problem. The proposed model constrains, within a given value, the probability that the service delay, which consists of the networking delay between hosts and the processing delay in each VNF, exceeds its threshold. We introduce a basic algorithm to solve our formulated delay-constrained optimization problem. For problem sizes that the basic algorithm cannot solve within an acceptable computation time, we develop a simulated annealing algorithm that incorporates Yen's algorithm to handle the delay constraint heuristically. We observe that both algorithms in the proposed model reduce the total allocated computing capacity by up to 56.3% compared to a baseline; the simulated annealing algorithm can obtain feasible solutions for problems that the basic algorithm cannot.
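    The following Python fragment sketches a generic simulated-annealing loop over backup-host assignments, only to illustrate the style of heuristic described above; the cost function and feasibility check are placeholders, and the real model also evaluates the delay constraint (e.g., via Yen's k-shortest paths) rather than accepting every candidate.

      # Generic simulated-annealing skeleton (placeholder cost/feasibility checks).
      import math, random

      random.seed(0)
      N_VNF, N_HOSTS = 6, 3
      capacity_per_vnf = [2, 1, 3, 2, 1, 2]

      def cost(assign):
          """Toy model: capacity reserved per backup host, shared within a host."""
          used = [0] * N_HOSTS
          for v, h in enumerate(assign):
              used[h] = max(used[h], capacity_per_vnf[v])
          return sum(used)

      def feasible(assign):
          return True      # placeholder for probabilistic-protection / delay checks

      def anneal(steps=5000, T0=5.0, alpha=0.999):
          cur = [random.randrange(N_HOSTS) for _ in range(N_VNF)]
          best, T = list(cur), T0
          for _ in range(steps):
              cand = list(cur)
              cand[random.randrange(N_VNF)] = random.randrange(N_HOSTS)
              if feasible(cand):
                  d = cost(cand) - cost(cur)
                  if d <= 0 or random.random() < math.exp(-d / T):
                      cur = cand
                      if cost(cur) < cost(best):
                          best = list(cur)
              T *= alpha
          return best, cost(best)

      print(anneal())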

  • Edge Computing Resource Allocation Algorithm for NB-IoT Based on Deep Reinforcement Learning

    Jiawen CHU  Chunyun PAN  Yafei WANG  Xiang YUN  Xuehua LI  

     
    PAPER-Network

    Publicized: 2022/11/04  Vol: E106-B No:5  Page(s): 439-447

    Mobile edge computing (MEC) technology guarantees the privacy and security of large-scale data in the Narrowband-IoT (NB-IoT) by deploying MEC servers near base stations to provide sufficient computing, storage, and data processing capacity to meet the delay and energy consumption requirements of NB-IoT terminal equipment. For the NB-IoT MEC system, this paper proposes a resource allocation algorithm based on deep reinforcement learning to optimize the total cost of task offloading and execution. Since the formulated problem is a mixed-integer non-linear program (MINLP), we cast it as a multi-agent distributed deep reinforcement learning (DRL) problem and address it using a dueling Q-learning network algorithm. Simulation results show that, compared with the deep Q-learning network and the all-local and all-offload cost algorithms, the proposed algorithm can effectively guarantee the success rates of task offloading and execution. In addition, when the execution task volume is 200 KBit, the total system cost of the proposed algorithm is reduced by at least 1.3%, and when the execution task volume is 600 KBit, the total cost of system execution tasks is reduced by up to 16.7%.
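    As background on the dueling architecture named above, the following NumPy sketch shows how a state-value stream and an advantage stream are combined into Q-values (Q = V + A - mean(A)); the layer sizes and random weights are illustrative, and this is not the paper's network.

      # Dueling Q-value head, shown with plain NumPy instead of a full deep network.
      import numpy as np

      rng = np.random.default_rng(1)
      STATE_DIM, N_ACTIONS, HIDDEN = 8, 4, 16

      W1 = rng.standard_normal((STATE_DIM, HIDDEN)) * 0.1
      Wv = rng.standard_normal((HIDDEN, 1)) * 0.1           # value stream
      Wa = rng.standard_normal((HIDDEN, N_ACTIONS)) * 0.1   # advantage stream

      def dueling_q(state):
          h = np.tanh(state @ W1)                 # shared feature layer
          v = h @ Wv                              # V(s): shape (1,)
          a = h @ Wa                              # A(s, a): shape (N_ACTIONS,)
          return v + a - a.mean()                 # Q(s, a) = V + A - mean(A)

      q = dueling_q(rng.standard_normal(STATE_DIM))
      print(q.shape, int(q.argmax()))             # pick the offloading action greedily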

  • Migration Model for Distributed Server Allocation

    Souhei YANASE  Fujun HE  Haruto TAKA  Akio KAWABATA  Eiji OKI  

     
    PAPER-Network Management/Operation

    Publicized: 2022/07/05  Vol: E106-B No:1  Page(s): 44-56

    This paper proposes a migration model for distributed server allocation. In distributed server allocation, each user is assigned to a server to minimize the communication delay. In the conventional model, a user cannot migrate to another server, in order to avoid instability. We develop a model in which each user can migrate to another server while receiving services. We formulate the proposed model as an integer linear programming problem, prove that the considered problem is NP-complete, and introduce a heuristic algorithm. Numerical results show that the proposed model reduces the average communication delay by up to 59% compared to the conventional model.

  • Holmes: A Hardware-Oriented Optimizer Using Logarithms

    Yoshiharu YAMAGISHI  Tatsuya KANEKO  Megumi AKAI-KASAYA  Tetsuya ASAI  

     
    PAPER

    Publicized: 2022/05/11  Vol: E105-D No:12  Page(s): 2040-2047

    Edge computing, which has been gaining attention in recent years, has many advantages, such as reducing the load on the cloud, not being affected by the communication environment, and providing excellent security. Therefore, many researchers have attempted to implement neural networks, which are representative of machine learning, in edge computing. Neural networks can be divided into inference and learning parts; however, there has been little research on implementing the learning part in edge computing, in contrast to the inference part. This is because learning requires more memory and computation than inference, easily exceeding the limit of resources available in edge computing. To overcome this problem, this research focuses on the optimizer, which is the heart of learning. In this paper, we introduce our new optimizer, hardware-oriented logarithmic momentum estimation (Holmes), which incorporates new perspectives not found in existing optimizers in terms of the characteristics and strengths of hardware. The performance of Holmes was evaluated by comparing it with other optimizers with respect to learning progress and convergence speed. Important aspects of hardware implementation, such as memory and operation requirements, are also discussed. The results show that Holmes is a good match for edge computing, with relatively low resource requirements and fast learning convergence. Holmes will help create an era in which advanced machine learning can be realized on edge computing.
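    To give a flavor of a hardware-oriented, logarithm-based update rule (this is an illustrative guess at the general style, not the published Holmes algorithm), the following sketch quantizes gradients to powers of two inside a momentum update, so a hardware implementation could replace multiplications with shifts; the toy loss and hyperparameters are assumptions.

      # Illustrative power-of-two quantized momentum update (not Holmes itself).
      import numpy as np

      def pow2_quantize(x, eps=1e-12):
          """Round the magnitude of x to the nearest power of two, keeping the sign."""
          mag = np.maximum(np.abs(x), eps)
          return np.sign(x) * 2.0 ** np.round(np.log2(mag))

      def log_momentum_step(w, grad, velocity, lr=2**-6, beta=0.875):
          velocity = beta * velocity + (1 - beta) * pow2_quantize(grad)
          return w - lr * velocity, velocity

      w = np.array([0.5, -1.0])
      v = np.zeros_like(w)
      for _ in range(100):
          grad = 2 * w                     # gradient of the toy loss ||w||^2
          w, v = log_momentum_step(w, grad, v)
      print(w)                             # shrinks toward zero on the toy problem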

  • Edge Computing-Enhanced Network Redundancy Elimination for Connected Cars

    Masahiro YOSHIDA  Koya MORI  Tomohiro INOUE  Hiroyuki TANAKA  

     
    PAPER

    Publicized: 2022/05/27  Vol: E105-B No:11  Page(s): 1372-1379

    Connected cars generate a huge amount of Internet of Things (IoT) sensor information called Controller Area Network (CAN) data. Recently, there has been growing interest in collecting CAN data from connected cars in a cloud system to enable life-critical use cases such as safe driving support. Although each CAN data packet is very small, a connected car generates thousands of CAN data packets per second. Therefore, real-time CAN data collection from connected cars into a cloud system is one of the most challenging problems in the current IoT. In this paper, we propose an Edge computing-enhanced network Redundancy Elimination service (EdgeRE) for CAN data collection. In developing EdgeRE, we designed a CAN data compression architecture that combines in-vehicle computers, edge datacenters, and a public cloud system. EdgeRE includes the ideas of hierarchical data compression and dynamic data buffering at edge datacenters for real-time CAN data collection. Across a wide range of field tests with connected cars and an edge computing testbed, we show that EdgeRE reduces bandwidth usage by 88% and the number of packets by 99%.

  • Incentive-Stable Matching Protocol for Service Chain Placement in Multi-Operator Edge System

    Jen-Yu WANG  Li-Hsing YEN  Juliana LIMAN  

     
    PAPER

    Publicized: 2022/05/27  Vol: E105-B No:11  Page(s): 1353-1360

    Network Function Virtualization (NFV) enables the embedding of Virtualized Network Functions (VNFs) into commodity servers. A sequence of VNFs can be chained in a particular order to form a service chain (SC). This paper considers placing multiple SCs in a geo-distributed edge system owned by multiple service providers (SPs). For a pair of SC and SP, minimizing the placement cost while meeting a latency constraint is formulated as an integer programming problem. As SC clients and SPs are self-interested, we study the matching between SCs and SPs that respects individuals' interests yet maximizes social welfare. The proposed matching approach excludes any blocking individual or blocking pair that may jeopardize the stability of the result. Simulation results show that the proposed approach performs well in terms of social welfare but is suboptimal concerning the number of placed SCs.
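    The following Python sketch shows a deferred-acceptance style matching between SCs and SPs of the general kind discussed above; it is a simplified illustration, not the paper's protocol, and the costs, capacities, and SP preferences are invented for the example.

      # Deferred-acceptance style matching: SCs propose to their cheapest feasible SP,
      # and each SP keeps only the proposals it prefers, up to its capacity.
      sc_cost = {                     # placement cost of each SC at each SP (lower is better)
          "sc1": {"spA": 3, "spB": 5},
          "sc2": {"spA": 4, "spB": 2},
          "sc3": {"spA": 6, "spB": 4},
      }
      sp_capacity = {"spA": 1, "spB": 2}

      def match(sc_cost, sp_capacity):
          unmatched = list(sc_cost)
          accepted = {sp: [] for sp in sp_capacity}
          rejected = {sc: set() for sc in sc_cost}
          while unmatched:
              sc = unmatched.pop()
              options = [sp for sp in sc_cost[sc] if sp not in rejected[sc]]
              if not options:
                  continue                                   # SC stays unplaced
              sp = min(options, key=lambda s: sc_cost[sc][s])  # propose to cheapest SP
              accepted[sp].append(sc)
              if len(accepted[sp]) > sp_capacity[sp]:
                  # SP keeps the SCs it values most (here: lowest placement cost).
                  accepted[sp].sort(key=lambda s: sc_cost[s][sp])
                  loser = accepted[sp].pop()
                  rejected[loser].add(sp)
                  unmatched.append(loser)
          return accepted

      print(match(sc_cost, sp_capacity))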

  • A Bus Crowdedness Sensing System Using Deep-Learning Based Object Detection

    Wenhao HUANG  Akira TSUGE  Yin CHEN  Tadashi OKOSHI  Jin NAKAZAWA  

     
    PAPER

    Publicized: 2022/06/23  Vol: E105-D No:10  Page(s): 1712-1720

    The crowdedness of buses is playing an increasingly important role in the disease control of COVID-19. The lack of a practical approach to sensing the crowdedness of buses is a major problem. This paper proposes a bus crowdedness sensing system that exploits deep learning-based object detection to count the numbers of passengers getting on and off a bus and thus estimate the crowdedness of buses in real time. In our prototype system, we combine the YOLOv5s object detection model with a Kalman filter object tracking algorithm to implement a sensing algorithm running on a Jetson Nano-based vehicular device mounted on a bus. Using driving recorder video data taken from a real bus, we experimentally evaluate the performance of the proposed sensing system and verify that it improves counting accuracy and achieves real-time processing on the Jetson Nano platform.
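    As a small illustration of the Kalman-filter tracking component mentioned above (not the paper's implementation), the following sketch filters one noisy coordinate of a tracked detection with a constant-velocity model; the frame rate and noise settings are assumptions.

      # Constant-velocity Kalman filter for one tracked coordinate (illustrative values).
      import numpy as np

      dt = 1 / 30                                   # assumed frame interval (30 fps)
      F = np.array([[1, dt], [0, 1]])               # state transition: [position, velocity]
      H = np.array([[1.0, 0.0]])                    # we only observe the position
      Q = np.eye(2) * 1e-3                          # process noise
      R = np.array([[2.0]])                         # measurement noise

      def kalman_step(x, P, z):
          # Predict
          x, P = F @ x, F @ P @ F.T + Q
          # Update with the new detection z (measured position)
          y = z - H @ x
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          return x + K @ y, (np.eye(2) - K @ H) @ P

      x, P = np.array([0.0, 0.0]), np.eye(2)
      for z in [1.0, 2.1, 2.9, 4.2, 5.0]:           # noisy positions of a moving target
          x, P = kalman_step(x, P, np.array([z]))
      print(x)                                       # estimated [position, velocity]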

  • Design and Implementation of an Edge Computing Testbed to Simplify Experimental Environment Setup

    Hiroaki YAMANAKA  Yuuichi TERANISHI  Eiji KAWAI  Hidehisa NAGANO  Hiroaki HARAI  

     
    PAPER-Dependable Computing

    Publicized: 2022/05/27  Vol: E105-D No:9  Page(s): 1516-1528

    Running IoT applications on edge computing infrastructures has the benefits of low response times and efficient bandwidth usage. System verification on a testbed is required before deploying IoT applications in production environments. In a testbed, Docker containers are preferable for a smooth transition of tested application programs to production environments. In addition, the round-trip times (RTTs) between Docker containers and clients must be ensured according to the target application's response time requirements. However, existing testbed systems do not ensure the RTTs between Docker containers and clients, so we must prepare a large amount of configuration data, including the RTTs between all pairs of wireless base station nodes and servers, to set up a testbed environment. In this paper, we present an edge computing testbed system with simple application programming interfaces (APIs) for testbed users that ensures the RTTs between Docker containers and clients. The proposed system automatically determines which servers to place Docker containers on according to virtual regions and the RTTs specified by the testbed users through the APIs. The virtual regions provide reduced-size information about the RTTs in a network. In the proposed system, the configuration data size is reduced by a factor equal to the number of servers, and the command-argument length is reduced to approximately one-third or less, while the system running time increases by 4.3 s.
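    A minimal sketch of the kind of RTT-constrained placement decision described above (assumption-based; the testbed's actual APIs are not shown here): given an RTT table from virtual regions to servers, pick the lowest-RTT server that satisfies the user-specified bound.

      # RTT-constrained container placement over an illustrative RTT table.
      rtt_ms = {                      # measured RTT from each virtual region to each server
          "region-east": {"server1": 4, "server2": 18, "server3": 35},
          "region-west": {"server1": 30, "server2": 6, "server3": 12},
      }

      def place_container(region, max_rtt_ms):
          """Return the lowest-RTT server that satisfies the requested bound."""
          candidates = {s: r for s, r in rtt_ms[region].items() if r <= max_rtt_ms}
          if not candidates:
              raise ValueError("no server satisfies the requested RTT")
          return min(candidates, key=candidates.get)

      print(place_container("region-east", max_rtt_ms=10))   # -> server1
      print(place_container("region-west", max_rtt_ms=10))   # -> server2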

  • Deep Coalitional Q-Learning for Dynamic Coalition Formation in Edge Computing

    Shiyao DING  Donghui LIN  

     
    PAPER

    Publicized: 2021/12/14  Vol: E105-D No:5  Page(s): 864-872

    With the growth of computation requirements in the Internet of Things, resource-limited edge servers usually need to cooperate to perform tasks. Most related studies assume a static cooperation approach, which might not suit the dynamic environment of edge computing. In this paper, we consider a dynamic cooperation approach that guides edge servers to form coalitions dynamically. This raises two issues: 1) how to guide them to form coalitions optimally and 2) how to cope with the dynamic feature that server statuses change as the tasks are performed. The coalitional Markov decision process (CMDP) model proposed in our previous work can handle these issues well. However, its basic solution, coalitional Q-learning, cannot handle large-scale problems when the number of tasks in edge computing is large. Our response is to propose a novel algorithm called deep coalitional Q-learning (DCQL) to solve this. To sum up, we first formulate the dynamic cooperation problem of edge servers as a CMDP: each edge server is regarded as an agent, and the dynamic process is modeled as an MDP in which the agents observe the current state to form several coalitions. Each coalition takes an action to impact the environment, which correspondingly transfers to the next state, and the process repeats. We then propose DCQL, which includes a deep neural network and can thus cope well with large-scale problems. DCQL guides the edge servers to form coalitions dynamically so as to optimize a given objective. Furthermore, we run experiments to verify the proposed algorithm's effectiveness in different settings.

  • Reconfigurable Neural Network Accelerator and Simulator for Model Implementation

    Yasuhiro NAKAHARA  Masato KIYAMA  Motoki AMAGASAKI  Qian ZHAO  Masahiro IIDA  

     
    PAPER

    Publicized: 2021/09/21  Vol: E105-A No:3  Page(s): 448-458

    Low power consumption is important in edge artificial intelligence (AI) chips, where the power supply is limited. Therefore, we propose the reconfigurable neural network accelerator (ReNA), an AI chip that can process both convolutional layers and fully connected layers with the same structure by reconfiguring the circuit. In addition, we developed tools for pre-evaluating the performance of a deep neural network (DNN) model implemented on ReNA. With this approach, we established the flow for implementing DNN models on ReNA and evaluated its power consumption. ReNA achieved 1.51 TOPS/W in the convolutional layers and 1.38 TOPS/W overall in a VGG16 model with a 70% pruning rate.

  • Fogcached: A DRAM/NVMM Hybrid KVS Server for Edge Computing

    Kouki OZAWA  Takahiro HIROFUCHI  Ryousei TAKANO  Midori SUGAYA  

     
    PAPER

    Publicized: 2021/08/18  Vol: E104-D No:12  Page(s): 2089-2096

    With the development of IoT devices and sensors, edge computing is leading towards new services such as autonomous cars and smart cities. Low-latency data access is an essential requirement for such services, and a large-capacity cache server is needed on the edge side. However, it is not realistic to build a large-capacity cache server using only DRAM, because DRAM is expensive and consumes substantial power. A hybrid main memory system, in which main memory consists of DRAM and non-volatile memory, is promising to address this issue: it achieves a large main memory capacity within the power supply capabilities of current servers. In this paper, we propose Fogcached, an extension of a widely used KVS (Key-Value Store) server program (Memcached) that exploits both DRAM and non-volatile main memory (NVMM). We used Intel Optane DCPM as the NVMM for its prototype. Fogcached implements a Dual-LRU (Least Recently Used) mechanism that seamlessly extends the memory management of Memcached to hybrid main memory. Fogcached reuses the segmented LRU of Memcached to manage cached objects in DRAM, adds another segmented LRU for those in DCPM, and bridges the LRUs with a mechanism that automatically replaces cached objects between DRAM and DCPM. Cached objects are autonomously moved between the two memory devices according to their access frequencies. Through experiments, we confirmed that Fogcached improved the peak value of a latency distribution by about 40% compared to Memcached.
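    The following Python sketch illustrates the general idea of a dual-LRU over a fast and a slow memory tier, with promotion on slow-tier hits and demotion on fast-tier overflow; it is a toy illustration with invented capacities, not the Fogcached implementation, which works on segmented LRUs inside Memcached.

      # Toy two-tier LRU: "fast" stands in for DRAM, "slow" for NVMM.
      from collections import OrderedDict

      class DualLRU:
          def __init__(self, fast_cap=2, slow_cap=4):
              self.fast, self.slow = OrderedDict(), OrderedDict()
              self.fast_cap, self.slow_cap = fast_cap, slow_cap

          def put(self, key, value):
              self.fast[key] = value
              self.fast.move_to_end(key)
              self._demote_if_needed()

          def get(self, key):
              if key in self.fast:
                  self.fast.move_to_end(key)
                  return self.fast[key]
              if key in self.slow:                    # hit in slow tier: promote
                  value = self.slow.pop(key)
                  self.put(key, value)
                  return value
              return None

          def _demote_if_needed(self):
              while len(self.fast) > self.fast_cap:   # move coldest object down
                  k, v = self.fast.popitem(last=False)
                  self.slow[k] = v
                  self.slow.move_to_end(k)
              while len(self.slow) > self.slow_cap:   # evict coldest object entirely
                  self.slow.popitem(last=False)

      cache = DualLRU()
      for i in range(5):
          cache.put(f"k{i}", i)
      print(list(cache.fast), list(cache.slow), cache.get("k0"))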

  • Fogcached-Ros: DRAM/NVMM Hybrid KVS Server with ROS Based Extension for ROS Application and SLAM Evaluation

    Koki HIGASHI  Yoichi ISHIWATA  Takeshi OHKAWA  Midori SUGAYA  

     
    PAPER

    Publicized: 2021/08/18  Vol: E104-D No:12  Page(s): 2097-2108

    Recently, edge servers located closer to devices than the cloud have come to be expected to process the large amount of sensor data generated by IoT devices such as robots. Applying KVS (Key-Value Store) to the edge as a cache server has been proposed as a method for obtaining high responsiveness. Above all, a hybrid-KVS server that uses both DRAM and NVMM (Non-Volatile Main Memory) devices is expected to achieve both responsiveness and reliability. However, its effectiveness has not been verified in actual applications, and its relationship with the cloud is not clear. The purpose of this study is to evaluate the effectiveness of hybrid-KVS servers using SLAM (Simultaneous Localization and Mapping), an application widely used in robots and autonomous driving; it is appropriate for an edge server because it requires both responsiveness and reliability. SLAM is generally implemented on ROS (Robot Operating System) middleware and communicates with the server through ROS middleware. However, if we use a hybrid-KVS on the edge with SLAM and ROS, the communication cannot be achieved because the message objects differ from the format expected by the KVS. Therefore, in this research, we propose a mechanism to apply ROS memory objects to a hybrid-KVS by designing and implementing a data serialization function that extends ROS. As a result of the proposed Fogcached-Ros and its evaluation, we confirm the effectiveness of low API overhead, support for the data used by SLAM, and a low latency difference between the edge and the cloud.

  • Joint Wireless and Computational Resource Allocation Based on Hierarchical Game for Mobile Edge Computing

    Weiwei XIA  Zhuorui LAN  Lianfeng SHEN  

     
    PAPER-Network

    Publicized: 2021/05/14  Vol: E104-B No:11  Page(s): 1395-1407

    In this paper, we propose a hierarchical Stackelberg game based resource allocation algorithm (HGRAA) to jointly allocate the wireless and computational resources of a mobile edge computing (MEC) system. The proposed HGRAA is composed of two levels: the lower-level evolutionary game (LEG) minimizes the cost of mobile terminals (MTs), and the upper-level exact potential game (UEPG) maximizes the utility of MEC servers. At the lower level, the MTs are divided into delay-sensitive MTs (DSMTs) and non-delay-sensitive MTs (NDSMTs) according to their different quality-of-service (QoS) requirements. The competition among DSMTs and NDSMTs in different service areas to share the limited available wireless and computational resources is formulated as a dynamic evolutionary game. The replicator dynamics are applied to obtain the evolutionary equilibrium that minimizes the costs imposed on MTs. At the upper level, an exact potential game is formulated to solve the resource sharing problem among MEC servers, and this problem is transformed into a nonlinear complementarity problem. The existence of a Nash equilibrium (NE) is proved, and the NE is obtained through the Karush-Kuhn-Tucker (KKT) conditions. Simulations illustrate that substantial performance improvements, such as in the average utility and resource utilization of MEC servers, can be achieved by applying the proposed HGRAA. Moreover, the cost of MTs is significantly lower than with other existing algorithms as the size of the input data increases, and the QoS requirements of different kinds of MTs are well guaranteed in terms of average delay and transmission data rate.
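    The replicator dynamics used at the lower level can be illustrated with a small discrete-time example (illustrative only; the payoff model below is a placeholder, not the paper's cost function): strategies whose payoff exceeds the population average grow in share until an evolutionary equilibrium is reached.

      # Discrete-time replicator dynamics over three service-area choices.
      import numpy as np

      def payoff(x):
          """Per-strategy payoff; congestion makes a popular choice less attractive."""
          base = np.array([1.0, 0.8, 0.6])        # illustrative base utilities
          return base - 0.9 * x                    # crowding penalty

      def replicator_step(x, step=0.1):
          f = payoff(x)
          avg = x @ f
          x = x + step * x * (f - avg)             # x_i' = x_i + h * x_i * (f_i - f_avg)
          return x / x.sum()                       # keep it a probability vector

      x = np.array([1 / 3, 1 / 3, 1 / 3])
      for _ in range(300):
          x = replicator_step(x)
      print(np.round(x, 3))                        # converges to the evolutionary equilibrium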

  • An Optimistic Synchronization Based Optimal Server Selection Scheme for Delay Sensitive Communication Services Open Access

    Akio KAWABATA  Bijoy Chand CHATTERJEE  Eiji OKI  

     
    PAPER-Network System

    Publicized: 2021/04/09  Vol: E104-B No:10  Page(s): 1277-1287

    In distributed processing for communication services, a proper server selection scheme is required to reduce delay while ensuring the event occurrence order. Although a conservative synchronization algorithm (CSA) has been used to achieve this goal, an optimistic synchronization algorithm (OSA) is also feasible for synchronizing distributed systems. Compared with CSA, which reproduces events in occurrence order before processing applications, OSA can realize low-delay communication because events are processed as they arrive. This paper proposes an optimal server selection scheme that uses OSA for distributed processing systems to minimize the end-to-end delay under the condition that the maximum status holding time is limited. In other words, the end-to-end delay is minimized based on the allowed rollback time, which is given according to application design aspects and the availability of computing resources. Numerical results indicate that the proposed scheme reduces the delay compared to the conventional scheme.
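    The contrast between optimistic and conservative synchronization can be pictured with a toy sketch (a conceptual illustration, not the paper's scheme): events are applied as they arrive, and a late event with an earlier timestamp triggers a rollback, represented here by re-deriving the state from the reordered log.

      # Toy optimistic processor: apply events on arrival, "roll back" on stragglers.
      class OptimisticProcessor:
          def __init__(self):
              self.log = []        # events accepted so far, kept sorted by timestamp
              self.rollbacks = 0

          def on_event(self, ts, payload):
              if self.log and ts < self.log[-1][0]:
                  self.rollbacks += 1              # straggler: recompute from the log
              self.log.append((ts, payload))
              self.log.sort(key=lambda e: e[0])
              return self.state()

          def state(self):
              # Recomputing from the ordered log stands in for "roll back and replay".
              return [p for _, p in self.log]

      p = OptimisticProcessor()
      for ts, ev in [(1, "a"), (3, "c"), (2, "b")]:   # "b" arrives out of order
          print(p.on_event(ts, ev))
      print("rollbacks:", p.rollbacks)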

  • Service Migration Scheduling with Bandwidth Limitation against Crowd Mobility in Edge Computing Environments

    Hiroaki YAMANAKA  Yuuichi TERANISHI  Eiji KAWAI  

     
    PAPER-Network

    Publicized: 2020/09/11  Vol: E104-B No:3  Page(s): 240-250

    Edge computing offers computing capability with ultra-low response times by leveraging servers close to end-user devices. Because end-user devices are mobile, the latency between the servers and the end-user devices can become long, and the response time might become unacceptable for an application service. Service (container) migration that follows the handover of end-user devices retains the response time. Service migration that follows the mass movement of people in the same geographic area at the same time due to an event (e.g., commuting) generates heavy bandwidth usage in the mobile backhaul network. Heavy usage by service migration reduces the bandwidth available for ordinary application traffic in the network. Shaping the migration traffic limits the bandwidth usage but delays service migration and increases the response time of the container for the moving end-user device. Furthermore, the number of targets of migration decisions (i.e., the system load) increases because delaying a migration process accumulates containers waiting for migration. In this paper, we propose a migration scheduling method to control bandwidth usage for migration in a network and ensure timely processing of service migration. Simulations comparing the proposal with state-of-the-art methods show that the proposal always suppresses the bandwidth usage under the predetermined threshold. The method reduced the number of containers exceeding the acceptable response time to as little as 40% of that of the compared state-of-the-art methods. Furthermore, the proposed method minimized the number of targets of migration decisions.
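    A minimal sketch of bandwidth-limited migration scheduling in the spirit described above (not the proposed method; sizes, deadlines, and the threshold are invented): per interval, admit waiting migrations in order of urgency until the bandwidth budget is used up.

      # Per-interval admission of container migrations under a bandwidth threshold.
      def schedule_migrations(pending, bandwidth_limit):
          """pending: list of dicts with 'name', 'bandwidth', 'deadline' (seconds)."""
          # Most urgent first, so delayed containers are less likely to exceed
          # their acceptable response time.
          pending = sorted(pending, key=lambda m: m["deadline"])
          admitted, used = [], 0.0
          for mig in pending:
              if used + mig["bandwidth"] <= bandwidth_limit:
                  admitted.append(mig["name"])
                  used += mig["bandwidth"]
          return admitted, used

      pending = [
          {"name": "c1", "bandwidth": 40.0, "deadline": 0.5},
          {"name": "c2", "bandwidth": 80.0, "deadline": 2.0},
          {"name": "c3", "bandwidth": 30.0, "deadline": 1.0},
      ]
      print(schedule_migrations(pending, bandwidth_limit=100.0))   # -> (['c1', 'c3'], 70.0)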

Showing hits 1-20 of 37