
Keyword Search Result

[Keyword] time (2,217 hits)

Showing 1-20 of 2,217 hits

  • MDX-Mixer: Music Demixing by Leveraging Source Signals Separated by Existing Demixing Models Open Access

    Tomoyasu NAKANO  Masataka GOTO  

     
    PAPER-Music Information Processing

  Publicized:
    2024/04/05
      Vol:
    E107-D No:8
      Page(s):
    1079-1088

    This paper presents MDX-Mixer, which improves music demixing (MDX) performance by leveraging source signals separated by multiple existing MDX models. Deep-learning-based MDX models have improved their separation performance year by year for four kinds of sound sources: “vocals,” “drums,” “bass,” and “other.” Our research question is whether mixing (i.e., taking a weighted sum of) the signals separated by state-of-the-art MDX models can achieve either the best of each model or even higher separation performance. Previous studies in singing voice separation and MDX have mixed separated signals of the same sound source using time-invariant or time-varying positive mixing weights. In contrast, this study is novel in that it also allows negative weights and performs time-varying mixing using all of the separated source signals and the music acoustic signal before separation. The time-varying weights are estimated by dividing the music acoustic signals and their separated signals into short segments and modeling them. In this paper, we propose two new systems: one that estimates time-invariant weights using a 1×1 convolution, and one that estimates time-varying weights by applying the MLP-Mixer layer proposed in the computer vision field to each segment. The latter model is called MDX-Mixer. Their performances were evaluated based on the source-to-distortion ratio (SDR) using the well-known MUSDB18-HQ dataset. The results show that MDX-Mixer achieved a higher SDR than the separated signals given by three state-of-the-art MDX models.
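
    A minimal sketch of the core mixing idea, assuming hypothetical segment boundaries and fixed weights (the paper estimates the weights with a 1×1 convolution or an MLP-Mixer layer; here they are drawn at random purely for illustration):

    ```python
    import numpy as np

    # Hypothetical setup: 3 MDX models' "vocals" estimates plus the unseparated mix.
    rng = np.random.default_rng(0)
    n_samples, seg_len = 8000, 1000
    signals = rng.standard_normal((4, n_samples))   # 3 separated signals + original mix

    # Time-varying weights: one weight per source per segment; negative values allowed.
    n_segs = n_samples // seg_len
    weights = rng.uniform(-0.5, 1.0, size=(n_segs, 4))

    # Weighted sum per segment -> the remixed "vocals" estimate.
    remixed = np.concatenate([
        weights[s] @ signals[:, s * seg_len:(s + 1) * seg_len]
        for s in range(n_segs)
    ])
    ```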

  • Method for Estimating Scatterer Information from the Response Waveform of a Backward Transient Scattering Field Using TD-SPT Open Access

    Keiji GOTO  Toru KAWANO  Munetoshi IWAKIRI  Tsubasa KAWAKAMI  Kazuki NAKAZAWA  

     
    PAPER-Electromagnetic Theory

  Publicized:
    2024/01/23
      Vol:
    E107-C No:8
      Page(s):
    210-222

    This paper proposes a method for estimating scatterer information from numerical data for the response waveform of a backward transient scattering field for both E- and H-polarizations, where a two-dimensional (2-D) coated metal cylinder is selected as the scatterer. It is assumed that a line source and an observation point are placed at different locations. The four types of scatterer information covered in this paper are the relative permittivity of the surrounding medium, the relative permittivity of the coating medium layer and its thickness, and the radius of the coated metal cylinder. Specifically, a time-domain saddle-point technique (TD-SPT) is used to derive scatterer information estimation formulae from the amplitude intensity ratios (AIRs) of adjacent backward transient scattering field components. The estimates are obtained by substituting the numerical data of the response waveforms of the backward transient scattering field components into the estimation formulae and performing iterative calculations. Furthermore, the minimum thickness of the coating medium layer for which the estimation method is valid is derived, and two kinds of applicability conditions for the estimation method are proposed. The effectiveness of the estimation method is verified by comparing the estimates with the set values. The noise tolerance and convergence characteristics of the estimation method and a method of controlling the estimation accuracy are also discussed.
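
    The abstract describes substituting measured waveform data into closed-form estimation formulae and iterating to convergence. Below is a generic sketch of that fixed-point loop, with a deliberately hypothetical update function standing in for the paper's TD-SPT-derived formulae:

    ```python
    def estimate_parameter(air, update, x0, tol=1e-9, max_iter=100):
        """Iterate x <- update(x, air) until the estimate stops changing.

        `air` is an amplitude intensity ratio of adjacent backward transient
        scattering field components; `update` stands in for a TD-SPT-derived
        estimation formula (hypothetical here).
        """
        x = x0
        for _ in range(max_iter):
            x_new = update(x, air)
            if abs(x_new - x) < tol:
                break
            x = x_new
        return x

    # Toy example: solve x = 1 + air/x (purely illustrative, not the paper's formula).
    eps_est = estimate_parameter(2.0, lambda x, a: 1.0 + a / x, x0=1.0)
    ```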

  • Power Peak Load Forecasting Based on Deep Time Series Analysis Method Open Access

    Ying-Chang HUNG  Duen-Ren LIU  

     
    PAPER-Artificial Intelligence, Data Mining

  Publicized:
    2024/03/21
      Vol:
    E107-D No:7
      Page(s):
    845-856

    Peak power load prediction is a critical factor directly impacting the stability of the power supply; it is a time-series problem closely tied to seasonal patterns in electricity usage, and it remains a multifaceted challenge in the field. This study proposes a forecasting method that combines three primary models: the GRU model, the self-attention mechanism, and the Transformer mechanism. Data preprocessing involves comprehensive cleaning, standardization, and the design of relevant functions to ensure robustness of the predictive modeling process. To capture temporal changes effectively, the research also incorporates “Weekly Moving Average” and “Monthly Moving Average” features into the dataset. Comparative analyses with established models such as LSTM, the self-attention network, the Transformer, ARIMA, and SVR show that the proposed models exhibit superior predictive performance, accurately forecasting electricity consumption. The contributions are twofold. First, the study introduces a prediction method combining the GRU model, self-attention mechanism, and Transformer mechanism. Second, it introduces and emphasizes the “Weekly Moving Average” and “Monthly Moving Average” features, which are crucial for capturing and interpreting seasonal variations in the dataset; incorporating them enhances the model's ability to account for seasonal factors and significantly improves the accuracy of peak power load forecasting.
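
    A minimal sketch of the moving-average feature construction, assuming a hypothetical daily peak-load series (the paper does not publish its exact window handling, so the 7-day and 30-day windows below are illustrative):

    ```python
    import pandas as pd

    # Hypothetical daily peak-load series indexed by date.
    idx = pd.date_range("2023-01-01", periods=120, freq="D")
    df = pd.DataFrame({"peak_load": range(120)}, index=idx)

    # "Weekly Moving Average" and "Monthly Moving Average" features.
    df["weekly_ma"] = df["peak_load"].rolling(7).mean()
    df["monthly_ma"] = df["peak_load"].rolling(30).mean()

    features = df.dropna()   # drop warm-up rows lacking a full window
    ```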

  • Cloud-Edge-Device Collaborative High Concurrency Access Management for Massive IoT Devices in Distribution Grid Open Access

    Shuai LI  Xinhong YOU  Shidong ZHANG  Mu FANG  Pengping ZHANG  

     
    PAPER-Systems and Control

  Publicized:
    2023/10/26
      Vol:
    E107-A No:7
      Page(s):
    946-957

    Emerging data-intensive services in the distribution grid impose high-concurrency access requirements on massive internet of things (IoT) devices. However, the lack of effective high-concurrency access management results in severe performance degradation. To address this challenge, we propose a cloud-edge-device collaborative high-concurrency access management algorithm based on multi-timescale joint optimization of channel pre-allocation and load balancing degree. We formulate an optimization problem to minimize the weighted sum of the edge-cloud load balancing degree and the queuing delay under an access-success-rate constraint. The problem is decomposed into a large-timescale channel pre-allocation subproblem, solved by a device-edge collaborative access priority scoring mechanism, and a small-timescale data access control subproblem, solved by a discounted empirical matching mechanism (DEM) that perceives the high-concurrency number and the queue backlog. In particular, information uncertainty caused by externalities is tackled by exploiting discounted empirical performance, which accurately captures the influence of historical time points on the present preference value. Simulation results demonstrate the effectiveness of the proposed algorithm in reducing the edge-cloud load balancing degree and queuing delay.
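
    A small sketch of the discounted-empirical idea the abstract describes: a preference value built from historical performance samples, with older observations discounted geometrically (the discount factor and the sample data are assumptions, not values from the paper):

    ```python
    def discounted_empirical_performance(history, gamma=0.9):
        """Discounted average of past performance samples (most recent last)."""
        num = den = 0.0
        for age, perf in enumerate(reversed(history)):  # age 0 = most recent
            w = gamma ** age
            num += w * perf
            den += w
        return num / den if den else 0.0

    # Preference value for one device-channel pair from past matching outcomes.
    pref = discounted_empirical_performance([0.8, 0.6, 0.9, 0.7])
    ```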

  • Federated Deep Reinforcement Learning for Multimedia Task Offloading and Resource Allocation in MEC Networks Open Access

    Rongqi ZHANG  Chunyun PAN  Yafei WANG  Yuanyuan YAO  Xuehua LI  

     
    PAPER-Network

      Vol:
    E107-B No:6
      Page(s):
    446-457

    With the maturation of 5G technology in recent years, multimedia services such as live video streaming and online gaming have flourished. These services frequently require low latency, which poses a significant challenge for computing latency-sensitive multimedia tasks. Mobile edge computing (MEC) is considered a key technology for addressing this challenge: it offloads computation-intensive tasks to edge servers deployed close to mobile nodes, which reduces task execution latency and relieves the computing pressure on multimedia devices. To use the MEC paradigm reasonably and efficiently, resource allocation becomes a new challenge. In this paper, we focus on multimedia tasks that need to be uploaded and processed in the network. We set up an optimization problem with the goal of minimizing the latency and energy consumption required to perform tasks on multimedia devices. To solve this complex and non-convex problem, we formulate it as a distributed deep reinforcement learning (DRL) problem and propose a federated Dueling deep Q-network (DDQN) based multimedia task offloading and resource allocation algorithm (FDRL-DDQN). In the algorithm, DRL is trained on each local device, while federated learning (FL) aggregates and updates the parameters of the trained local models. Furthermore, to handle the non-identically and independently distributed (non-IID) data of multimedia devices, we develop a method for selecting the participating federated devices. Simulation results show that the FDRL-DDQN algorithm reduces the total cost by 31.3% compared to the DQN algorithm when the task data size is 1000 kbit, and by up to 35.3% compared to the traditional baseline algorithm.
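
    A minimal sketch of the federated aggregation step described above, assuming each device's trained DDQN weights are numpy arrays keyed by layer name (FedAvg-style uniform averaging; the paper's exact aggregation rule and any weighting are not specified here):

    ```python
    import numpy as np

    def federated_average(local_models):
        """Average parameters from the selected participating devices."""
        keys = local_models[0].keys()
        return {k: np.mean([m[k] for m in local_models], axis=0) for k in keys}

    # Hypothetical weights from three selected devices.
    devices = [{"fc1": np.full((4, 4), i), "fc2": np.full(4, i)} for i in range(3)]
    global_model = federated_average(devices)   # broadcast back to devices next round
    ```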

  • Secrecy Outage Probability and Secrecy Diversity Order of Alamouti STBC with Decision Feedback Detection over Time-Selective Fading Channels Open Access

    Gyulim KIM  Hoojin LEE  Xinrong LI  Seong Ho CHAE  

     
    LETTER-Communication Theory and Signals

  Publicized:
    2023/09/19
      Vol:
    E107-A No:6
      Page(s):
    923-927

    This letter studies the secrecy outage probability (SOP) and the secrecy diversity order of Alamouti STBC with decision feedback (DF) detection over time-selective fading channels. For given temporal correlations, we derive the exact SOPs and their asymptotic approximations for all possible combinations of detection schemes, including joint maximum likelihood (JML), zero-forcing (ZF), and DF at Bob and Eve. We reveal that the SOP is mainly influenced by the detection scheme of the legitimate receiver rather than that of the eavesdropper, and that the achievable secrecy diversity order converges to two for JML at Bob (i.e., JML-JML/ZF/DF) and to one for the other cases (i.e., ZF-JML/ZF/DF, DF-JML/ZF/DF). Here, a p-q combination pair indicates that Bob and Eve adopt detection methods p ∈ {JML, ZF, DF} and q ∈ {JML, ZF, DF}, respectively.
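
    For context, the standard definition of the secrecy outage probability (a textbook definition, not the letter's exact expression, which additionally depends on the temporal correlation and the detection pair):

    ```latex
    % Secrecy capacity and secrecy outage probability (standard definitions).
    % C_B, C_E: instantaneous capacities of Bob's and Eve's channels; R_s: target secrecy rate.
    \begin{align}
      C_s &= \left[\, C_B - C_E \,\right]^{+} = \max\{C_B - C_E,\, 0\},\\
      \mathrm{SOP} &= \Pr\left(C_s < R_s\right).
    \end{align}
    ```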

  • Finformer: Fast Incremental and General Time Series Data Prediction Open Access

    Savong BOU  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

     
    PAPER

  Publicized:
    2024/01/09
      Vol:
    E107-D No:5
      Page(s):
    625-637

    Forecasting time-series data is useful in many fields, such as stock price prediction, autonomous driving, and weather forecasting. Many existing forecasting models work well on short-sequence time series, but their performance suffers significantly on long-sequence time series. Research in this direction has recently intensified, and Informer is currently the most efficient prediction model. Informer's main drawback is that it does not allow incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses this bottleneck by reducing Informer's training/prediction time. Finformer efficiently computes the positional/temporal/value embeddings and the Query/Key/Value of the self-attention incrementally. Theoretically, Finformer improves the speed of both training and prediction over the state-of-the-art Informer model. Extensive experiments show that Finformer is about 26% faster than Informer for both short- and long-sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d; InTrans is one of our previous works and the predecessor of Finformer.
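
    A toy sketch of the incremental idea: when one new timestep arrives in a streaming window, only the new position's projection needs computing, and previously computed rows are reused from a cache. The projection shapes and cache layout are assumptions for illustration, not Finformer's actual implementation:

    ```python
    import numpy as np

    d_model = 8
    Wq = np.random.default_rng(0).standard_normal((d_model, d_model))

    cache_q = []                      # cached Q rows for timesteps seen so far

    def on_new_timestep(x_t):
        """Compute Q only for the newly arrived embedding; reuse the cache."""
        cache_q.append(x_t @ Wq)      # one row of work instead of the full window
        return np.stack(cache_q)      # Q matrix for the current window

    for _ in range(5):                # stand-in for a streaming input
        Q = on_new_timestep(np.random.standard_normal(d_model))
    ```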

  • A BDD-Based Approach to Finite-Time Control of Boolean Networks Open Access

    Fuma MOTOYAMA  Koichi KOBAYASHI  Yuh YAMASHITA  

     
    PAPER

  Publicized:
    2023/10/23
      Vol:
    E107-A No:5
      Page(s):
    793-798

    Control of complex networks such as gene regulatory networks is one of the fundamental problems in control theory. A Boolean network (BN) is one of the mathematical models of complex networks; it represents the dynamical behavior by Boolean functions. In this paper, a solution method for the finite-time control problem of BNs is proposed using a binary decision diagram (BDD). In this problem, we find all combinations of the initial state and the control input sequence such that a certain control specification is satisfied. The use of BDDs enables us to solve this problem for BNs to which the conventional method cannot be applied. First, the outline of BNs and BDDs is explained, and the problem studied in this paper is given. Next, a solution method using BDDs is proposed. Finally, a numerical example on a 67-node BN is presented.
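
    A brute-force sketch of the problem statement on a tiny BN (the paper's contribution is to represent exactly this solution set symbolically with BDDs so it scales to 67 nodes; the 2-node network and the specification below are hypothetical):

    ```python
    from itertools import product

    # Hypothetical 2-node BN with one Boolean control input u per step:
    #   x1(t+1) = x2(t) AND u(t),   x2(t+1) = NOT x1(t)
    def step(x, u):
        return (x[1] and u, not x[0])

    def reaches(x0, inputs, target):
        x = x0
        for u in inputs:
            x = step(x, u)
        return x == target

    # Enumerate all (initial state, input sequence) pairs meeting the spec
    # "reach (True, True) at time N" -- the set the BDD encodes symbolically.
    N, target = 2, (True, True)
    solutions = [(x0, us)
                 for x0 in product([False, True], repeat=2)
                 for us in product([False, True], repeat=N)
                 if reaches(x0, us, target)]
    ```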

  • Grid Sample Based Temporal Iteration for Fully Pipelined 1-ms SLIC Superpixel Segmentation System Open Access

    Yuan LI  Tingting HU  Ryuji FUCHIKAMI  Takeshi IKENAGA  

     
    PAPER-Computer System

  Publicized:
    2023/12/19
      Vol:
    E107-D No:4
      Page(s):
    515-524

    A 1-millisecond (1-ms) vision system, which processes videos at 1000 frames per second (FPS) with a delay within 1 ms/frame, plays an increasingly important role in fields such as robotics and factory automation. Superpixel segmentation, one of the most extensively employed image over-segmentation methods, is a crucial pre-processing step for reducing computation in various computer vision applications. Among the different superpixel methods, simple linear iterative clustering (SLIC) has gained widespread adoption due to its simplicity, effectiveness, and computational efficiency. However, the iterative assignment and update steps in SLIC make it challenging to achieve high processing speed. To address this limitation and develop a SLIC superpixel segmentation system with a 1-ms delay, this paper proposes grid-sample-based temporal iteration. By leveraging the high frame rate of the input video, the proposed method distributes the iterations over the temporal domain, ensuring that the system's delay stays within one frame. Additionally, grid sample information is added as initialization information to the obtained superpixel centers to enhance the stability of the superpixels. Furthermore, a selective label propagation based pipeline architecture is proposed for parallel computation of all possibilities of label propagation. This eliminates the data dependency between adjacent pixels and enables a fully pipelined system. Evaluation results demonstrate that the proposed system achieves boundary recall and under-segmentation error comparable to the original SLIC algorithm. When considering label consistency, the proposed system surpasses state-of-the-art superpixel segmentation methods. Moreover, in terms of hardware performance, the proposed system processes 1000-FPS images with a delay of 0.985 ms/frame.
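
    A simplified sketch of distributing iterations over the temporal domain: instead of running all clustering iterations on one frame, run one assignment/update step per incoming frame and carry the centers forward. Plain k-means stands in for SLIC's spatially windowed clustering, and the shapes and feature layout are assumptions:

    ```python
    import numpy as np

    def one_iteration(pixels, centers):
        """One SLIC-like assignment + update step (plain k-means for brevity;
        real SLIC restricts the search to a spatial window around each center)."""
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centers)):
            members = pixels[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
        return labels, centers

    # One iteration per 1000-FPS frame, carrying centers across frames, keeps
    # the per-frame work (and thus the delay) within the 1 ms budget.
    rng = np.random.default_rng(0)
    centers = rng.random((8, 5))             # 8 superpixels; features (x, y, L, a, b)
    for frame in range(5):                   # stand-in for consecutive frames
        pixels = rng.random((1000, 5))       # stand-in for per-frame pixel features
        labels, centers = one_iteration(pixels, centers)
    ```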

  • Long Short-Term Memory for Forecasting Degradation Recovery Process with Binary Maintenance Intervention Records Open Access

    Katsuya KOSUKEGAWA  Kazuhiko KAWAMOTO  

     
    LETTER-Nonlinear Problems

  Publicized:
    2023/08/07
      Vol:
    E107-A No:4
      Page(s):
    666-669

    We consider the problem of forecasting the degradation recovery process of civil structures for prognosis and health management. In this process, structural health degrades over time but recovers when a maintenance intervention is performed. Maintenance interventions are typically recorded in terms of date and type, and such records can be represented as binary time series. Using binary maintenance intervention records, we forecast the process with Long Short-Term Memory (LSTM). In this study, we experimentally examine how to feed binary time series data into the LSTM by comparing the concatenation and reinitialization methods. The former concatenates the maintenance intervention records with the health data and feeds them into the LSTM; the latter reinitializes the LSTM internal memory when a maintenance intervention is performed. Experimental results with synthetic data reveal that the concatenation method outperforms the reinitialization method.
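
    A compact sketch of the two feeding strategies, assuming a univariate health signal and a binary intervention series (layer sizes and data are illustrative):

    ```python
    import torch
    import torch.nn as nn

    health = torch.randn(1, 100, 1)                    # hypothetical health measurements
    maint = torch.zeros(1, 100, 1); maint[0, 40] = 1   # intervention at t = 40

    # Concatenation method: the intervention record becomes an extra input feature.
    lstm_cat = nn.LSTM(input_size=2, hidden_size=16, batch_first=True)
    out, _ = lstm_cat(torch.cat([health, maint], dim=-1))

    # Reinitialization method: reset (h, c) to zeros whenever an intervention occurs.
    lstm_re = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
    state = None
    for t in range(100):
        if maint[0, t, 0] == 1:
            state = None                               # drop the internal memory
        o, state = lstm_re(health[:, t:t + 1, :], state)
    ```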

  • A Novel Anomaly Detection Framework Based on Model Serialization

    Byeongtae PARK  Dong-Kyu CHAE  

     
    LETTER-Artificial Intelligence, Data Mining

  Publicized:
    2023/11/21
      Vol:
    E107-D No:3
      Page(s):
    420-423

    Recently, multivariate time-series data have been generated in various environments, such as sensor networks and the IoT, making anomaly detection in time-series data an essential research topic. Unsupervised anomaly detectors identify anomalies by training a model on normal data and producing high residuals for abnormal observations. However, a fundamental issue arises because anomalies do not consistently result in high residuals, necessitating a focus on the time-series patterns of the residuals rather than individual residual sizes. In this paper, we present a novel framework comprising two serialized anomaly detectors: the first model computes residuals as usual, while the second evaluates the time-series pattern of the computed residuals to determine whether they are normal or abnormal. Experiments conducted on real-world time-series data demonstrate the effectiveness of the proposed framework.
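
    A skeletal sketch of serializing two detectors this way, with simple stand-ins (a moving-average predictor replaces the first learned model, and a rolling-variance score replaces the second; both are illustrative assumptions, not the paper's models):

    ```python
    import numpy as np

    def stage1_residuals(x, window=5):
        """Stand-in for the first detector: residual = |x - moving-average forecast|."""
        pred = np.convolve(x, np.ones(window) / window, mode="same")
        return np.abs(x - pred)

    def stage2_pattern_score(residuals, window=10):
        """Stand-in for the second detector: score the *pattern* of residuals
        (rolling variance) rather than individual residual sizes."""
        return np.array([residuals[max(0, i - window):i + 1].var()
                         for i in range(len(residuals))])

    x = np.sin(np.linspace(0, 20, 400)); x[250:260] += 0.8   # injected anomaly
    scores = stage2_pattern_score(stage1_residuals(x))
    anomalies = scores > scores.mean() + 3 * scores.std()
    ```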

  • Online Job Scheduling with K Servers

    Xuanke JIANG  Sherief HASHIMA  Kohei HATANO  Eiji TAKIMOTO  

     
    PAPER

  Publicized:
    2023/11/15
      Vol:
    E107-D No:3
      Page(s):
    286-293

    In this paper, we investigate an online job scheduling problem with n jobs and k servers, where the accessibilities between the jobs and the servers are given as a bipartite graph. The scheduler is tasked with minimizing the regret, defined as the difference between the total flow time of the scheduler over T rounds and that of the best fixed scheduling in hindsight. We propose an algorithm whose regret bounds are $O(n^2 \sqrt{T \ln(nk)})$ for general bipartite graphs, $O((n^2/k^{1/2}) \sqrt{T \ln(nk)})$ for complete bipartite graphs, and $O((n^2/k) \sqrt{T \ln(nk)})$ for disjoint star graphs, respectively. We also give a lower regret bound of $\Omega((n^2/k) \sqrt{T})$ for disjoint star graphs, implying that our regret bounds are almost optimal.

  • CMND: Consistent-Aware Multi-Server Network Design Model for Delay-Sensitive Applications

    Akio KAWABATA  Bijoy CHAND CHATTERJEE  Eiji OKI  

     
    PAPER-Network System

      Vol:
    E107-B No:3
      Page(s):
    321-329

    This paper proposes a network design model that considers data consistency for a delay-sensitive distributed processing system. Data consistency is determined by collating a server's own state with the states of the slave servers; if a state mismatches those of the other servers, a rollback process is initiated to modify the state so that data consistency is guaranteed. In the proposed model, the selected servers and the master-slave server pairs are determined so as to minimize the end-to-end delay and the delay for data consistency. We formulate the proposed model as an integer linear programming problem and evaluate the delay performance and computation time in two network models with two, three, and four slave servers. The proposed model reduces the delay for data consistency by up to 31 percent compared with a typical model that collates the status of all servers at one master server. The computation time is a few seconds, which is acceptable for network design before service launch. These results indicate that the proposed model is effective for delay-sensitive applications.
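
    A toy sketch of casting such a server-selection problem as an integer linear program with PuLP (the tiny delay table, the fixed server count, and the objective are all assumptions; the paper's formulation additionally models master-slave pairing and the data-consistency delay):

    ```python
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    delays = {"s1": 12, "s2": 7, "s3": 9}            # hypothetical end-to-end delays (ms)
    x = {s: LpVariable(f"use_{s}", cat=LpBinary) for s in delays}

    prob = LpProblem("server_selection", LpMinimize)
    prob += lpSum(delays[s] * x[s] for s in delays)  # minimize total selected delay
    prob += lpSum(x.values()) == 2                   # pick exactly two slave servers

    prob.solve()
    chosen = [s for s in delays if x[s].value() == 1]
    ```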

  • Understanding File System Operations of a Secure Container Runtime Using System Call Tracing Technique

    Sunwoo JANG  Young-Kyoon SUH  Byungchul TAK  

     
    LETTER-Software System

  Publicized:
    2023/11/01
      Vol:
    E107-D No:2
      Page(s):
    229-233

    This letter presents a technique that observes the system call mapping behavior of the proxy kernel layer of secure container runtimes. We applied it to the file system operations of a secure container runtime, gVisor. We found that gVisor's operations can become more expensive than native execution, issuing 48× more syscalls for open and 6× more for read and write.

  • Semantic Relationship-Based Unsupervised Representation Learning of Multivariate Time Series

    Chengyang YE  Qiang MA  

     
    PAPER-Artificial Intelligence, Data Mining

  Publicized:
    2023/11/16
      Vol:
    E107-D No:2
      Page(s):
    191-200

    Representation learning is a crucial and complex task for multivariate time-series data analysis, with a wide range of applications including trend analysis, time-series data search, and forecasting. In practice, unsupervised learning is strongly preferred owing to sparse labeling. However, most existing studies focus on the representation of individual subseries without considering the relationships between different subseries, which in certain scenarios leads to downstream task failures. Here, an unsupervised representation learning model is proposed for multivariate time series that considers the semantic relationships among subseries. Specifically, the covariance calculated by a Gaussian process (GP) is introduced into the self-attention mechanism to capture the relationship features of the subseries. Additionally, a novel unsupervised method is designed to learn the representation of multivariate time series. To address the challenge of variable-length input subseries, a temporal pyramid pooling (TPP) method is applied to construct input vectors of equal length. The experimental results show that our model has substantial advantages over other representation learning models. We conducted experiments with the proposed algorithm and baseline algorithms on two downstream tasks: classification and retrieval. In the classification task, the proposed model achieved the best performance on seven of ten datasets, with an average accuracy of 76%; in the retrieval task, it achieved the best performance across datasets and hidden sizes. The ablation study also demonstrates the significance of semantic relationships in multivariate time-series representation learning.
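
    A small sketch of temporal pyramid pooling as the abstract uses it: pooling a variable-length series at several temporal resolutions so every input maps to the same fixed-length vector (the pyramid levels and max-pooling choice here are assumptions):

    ```python
    import numpy as np

    def temporal_pyramid_pooling(x, levels=(1, 2, 4)):
        """Max-pool a (length, channels) series into sum(levels) bins per channel."""
        pooled = []
        for n_bins in levels:
            for b in np.array_split(x, n_bins, axis=0):
                pooled.append(b.max(axis=0))
        return np.concatenate(pooled)        # fixed length: sum(levels) * channels

    # Two series of different lengths map to vectors of identical size.
    v1 = temporal_pyramid_pooling(np.random.rand(37, 3))
    v2 = temporal_pyramid_pooling(np.random.rand(120, 3))
    assert v1.shape == v2.shape == (21,)     # (1+2+4) bins x 3 channels
    ```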

  • Virtualizing DVFS for Energy Minimization of Embedded Dual-OS Platform

    Takumi KOMORI  Yutaka MASUDA  Tohru ISHIHARA  

     
    PAPER

  Publicized:
    2023/07/12
      Vol:
    E107-A No:1
      Page(s):
    3-15

    Recent embedded systems require both traditional machinery control and information processing, such as network and GUI handling. A dual-OS platform consolidates a real-time OS (RTOS) and a general-purpose OS (GPOS) to realize efficient software development on one physical processor. Although the dual-OS platform is attracting increasing attention, it often suffers from energy inefficiency in the GPOS in order to guarantee the real-time responses of the RTOS. This paper proposes an energy minimization method called DVFS virtualization, which allows running multiple DVFS policies dedicated to the RTOS and the GPOS, respectively. An experimental evaluation using a commercial microcontroller showed that the proposed hardware can change the supply voltage within 500 ns and reduce the energy consumption of typical applications by 60% in the best case compared with conventional dual-OS platforms. Furthermore, an evaluation using a commercial microprocessor achieved a 15% energy reduction for practical open-source software in the best case.
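
    A minimal sketch of what running two dedicated DVFS policies on one voltage domain could look like, assuming (hypothetically; the paper implements this in hardware) that the shared supply must always satisfy the stricter request so RTOS deadlines are never violated:

    ```python
    # Hypothetical per-OS DVFS requests, in volts; the arbitration applies the
    # stricter (higher) one so the RTOS never misses a deadline, while the GPOS
    # may independently request a lower, energy-saving operating point.
    def arbitrate(rtos_request_v: float, gpos_request_v: float) -> float:
        return max(rtos_request_v, gpos_request_v)

    supply = arbitrate(rtos_request_v=1.1, gpos_request_v=0.8)  # -> 1.1 V
    ```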

  • Feasibility Study of Numerical Calculation and Machine Learning Hybrid Approach for Renal Denervation Temperature Prediction

    Aditya RAKHMADI  Kazuyuki SAITO  

     
    PAPER-Electromagnetic Theory

  Publicized:
    2023/05/22
      Vol:
    E106-C No:12
      Page(s):
    799-807

    Transcatheter renal denervation (RDN) is a novel treatment that reduces blood pressure in patients with resistant hypertension by eliminating the renal sympathetic nerves with an energy-based catheter, mostly radio-frequency (RF) current. However, several inconsistent RDN treatments have been reported, mainly due to the narrow heating area of RF current and the inability to confirm a successful nerve ablation in a deep area. We proposed microwave energy as an alternative for creating a wider ablation area; however, confirming a successful ablation remains a problem. In this paper, we design a prediction method for deep renal nerve ablation sites using hybrid numerical-calculation-driven machine learning (ML) in combination with a microwave catheter. This work is a first-step investigation of the hybrid ML prediction capability in a real-world situation. A catheter with a single-slot coaxial antenna at 2.45 GHz with a balloon catheter, combined with a thin thermometer probe on the balloon surface, is proposed. The lumen temperature measured by the probe is used as the ML input to predict the temperature rise at the ablation site. Heating experiments using phantoms with 6 mm and 8 mm holes at 41.3 W excitation power, and with 8 mm holes at 36.4 W, were performed eight times each to check the feasibility and accuracy of the ML algorithm; in addition, the temperature at the ablation site was measured for reference. The predictions of the ML algorithm agree well with the reference, with maximum differences of 6°C for the 6 mm case and 3°C for the 8 mm cases (both powers). Overall, the proposed ML algorithm is capable of predicting the temperature rise at the ablation site with high accuracy.
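
    A minimal sketch of the hybrid idea: train a regressor on numerically simulated pairs (lumen-surface temperature curve → ablation-site temperature rise), then predict from measured probe data. The model class, names, shapes, and synthetic data are illustrative assumptions, not the paper's actual model:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X_sim = rng.random((200, 30))          # 200 simulated lumen-temperature curves
    y_sim = X_sim.mean(axis=1) * 60.0      # stand-in for simulated site temperature

    model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    model.fit(X_sim, y_sim)                # "numerical calculation" supplies training data

    x_measured = rng.random((1, 30))       # stand-in for a measured probe curve
    print(model.predict(x_measured))       # predicted ablation-site temperature rise
    ```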

  • Minimization of Energy Consumption in TDMA-Based Wireless-Powered Multi-Access Edge Computing Networks

    Xi CHEN  Guodong JIANG  Kaikai CHI  Shubin ZHANG  Gang CHEN  Jiang LIU  

     
    PAPER-Communication Theory and Signals

  Publicized:
    2023/06/19
      Vol:
    E106-A No:12
      Page(s):
    1544-1554

    Many nodes in the Internet of things (IoT) rely on batteries for power, while the demand for executing compute-intensive and latency-sensitive tasks on IoT nodes is increasing. In some practical scenarios, the computation tasks of wireless devices (WDs) are non-separable, so binary offloading strategies must be used. In this paper, we focus on the design of an efficient binary offloading algorithm that minimizes the system energy consumption (EC) of TDMA-based wireless-powered multi-access edge computing networks, where WDs either compute tasks locally or offload them to hybrid access points (H-APs). We formulate the EC minimization problem, which is non-convex, and decompose it into a master problem that optimizes the binary offloading decision and a subproblem that optimizes the wireless power transfer (WPT) duration and the task offloading transmission durations. For the master problem, a DRL-based method is applied to obtain a near-optimal offloading decision. For the subproblem, we first consider the scenario where the nodes have no completion time constraints and obtain the optimal analytical solution. We then consider the scenario with the constraints; by jointly using the golden section method and the bisection method, the optimal solution can be obtained owing to the convexity of the constraint function. Simulation results show that the proposed DRL-based offloading algorithm achieves near-minimal EC.
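
    A small sketch of the golden section method used in the subproblem, minimizing a one-dimensional convex function over an interval (the energy-vs-duration curve below is a hypothetical stand-in):

    ```python
    import math

    def golden_section_min(f, a, b, tol=1e-8):
        """Minimize a unimodal f on [a, b] via golden-section search."""
        inv_phi = (math.sqrt(5) - 1) / 2
        c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c
                c = b - inv_phi * (b - a)
            else:
                a, c = c, d
                d = a + inv_phi * (b - a)
        return (a + b) / 2

    # Hypothetical convex energy-vs-duration curve with a minimum near t = 0.75.
    t_opt = golden_section_min(lambda t: (t - 0.75) ** 2 + 0.1 / (t + 0.2), 0.01, 2.0)
    ```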

  • Time-Frequency Characteristics of Ionospheric Clutter in High Frequency Surface Wave Radar during Typhoon Muifa

    Xiaolong ZHENG  Bangjie LI  Daqiao ZHANG  Di YAO  Xuguang YANG  

     
    LETTER-Digital Signal Processing

  Publicized:
    2023/04/18
      Vol:
    E106-A No:10
      Page(s):
    1358-1361

    Ionospheric clutter in high-frequency surface wave radar (HFSWR) is the reflection of electromagnetic waves from the ionosphere back to the receiver; for the primary purpose of target detection in HFSWR, it should be suppressed as much as possible. However, ionospheric clutter also contains vast quantities of ionospheric state information, and some of the relevant ionospheric parameters can be inferred by studying it. This is especially true during typhoons, when the ionospheric state changes drastically under the influence of typhoon-excited gravity waves. By utilizing the time-frequency characteristics of ionospheric clutter during a typhoon, information such as the trend of electron concentration changes in the ionosphere and the direction of the typhoon can be obtained. The results of processing the radar data demonstrate the effectiveness of this method.
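
    A minimal sketch of extracting such time-frequency characteristics with a short-time Fourier transform, assuming a hypothetical clutter record and sampling rate (the letter's actual processing chain is not specified here):

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 500.0                                        # hypothetical sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    clutter = np.sin(2 * np.pi * (5 + 0.05 * t) * t)  # stand-in chirp-like clutter

    # Time-frequency map: how the clutter's spectral content drifts over time.
    f, seg_t, Zxx = stft(clutter, fs=fs, nperseg=1024)
    power = np.abs(Zxx) ** 2                          # inspect ridges along seg_t
    ```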

  • Theoretical Analysis of Fully Wireless-Power-Transfer Node Networks Open Access

    Hiroshi SAITO  

     
    PAPER-Fundamental Theories for Communications

  Publicized:
    2023/05/10
      Vol:
    E106-B No:10
      Page(s):
    864-872

    The performance of a fully wireless-power-transfer (WPT) node network, in which each node transfers (or receives) energy through a wireless channel when it has sufficient (or insufficient) energy in its battery, is theoretically analyzed. The lost job ratio (LJR), namely the ratio of (i) the amount of jobs that cannot be done because a node's battery runs out to (ii) the amount of jobs that should be done, is used as the performance metric. It describes the effect of each node's battery running out and how much additional energy is needed. Although it is known that WPT can reduce the probability of the battery running out among a few nodes within a small area, the performance of a fully WPT network has not been clarified. By using stochastic geometry and first-passage-time analysis of a diffusion process, the expected LJR is theoretically derived. Numerical examples demonstrate that the key parameters determining the performance of the network are the node density, the threshold for switching between the “transferring energy” and “receiving energy” statuses, and the parameters of power conversion. They also demonstrate the following: (1) The mean energy stored in the node batteries decreases in the network because of the loss caused by WPT, so a fully WPT network cannot decrease the probability of the battery running out under the current WPT efficiency. (2) When the saturation value of power conversion increases, a fully WPT network can decrease the probability of the battery running out, although the mean energy stored in the node batteries still decreases. This is explained by the fact that the variance of the stored energy in each node's battery becomes smaller owing to the transfer of energy from nodes with sufficient energy to nodes with insufficient energy.
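
    The metric, written out (this directly restates the abstract's definition; the expectation form is the natural reading, not necessarily the paper's exact notation):

    ```latex
    % Lost job ratio: jobs lost to battery depletion over jobs that should be done.
    \[
      \mathrm{LJR}
      = \frac{\mathbb{E}[\text{amount of jobs not done due to battery depletion}]}
             {\mathbb{E}[\text{amount of jobs that should be done}]}
    \]
    ```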
