Keyword Search Result

[Keyword] time series forecasting (3 hits)

1-3 of 3 hits
  • Long Short-Term Memory for Forecasting Degradation Recovery Process with Binary Maintenance Intervention Records Open Access

    Katsuya KOSUKEGAWA  Kazuhiko KAWAMOTO  

     
    LETTER-Nonlinear Problems

      Publicized:
    2023/08/07
      Vol:
    E107-A No:4
      Page(s):
    666-669

    We considered the problem of forecasting the degradation recovery process of civil structures for prognosis and health management. In this process, structural health degrades over time but recovers when a maintenance intervention is performed. Maintenance interventions are typically recorded in terms of date and type, and such records can be represented as binary time series. Using these binary maintenance intervention records, we forecast the process with Long Short-Term Memory (LSTM). In this study, we experimentally examined how to feed binary time series data into an LSTM by comparing two methods: concatenation and reinitialization. The former concatenates the maintenance intervention records with the health data and feeds them into the LSTM; the latter reinitializes the LSTM's internal memory whenever a maintenance intervention is performed. Experimental results on synthetic data revealed that the concatenation method outperformed the reinitialization method.
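
    The two feeding strategies can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed names and shapes (health, interventions, hidden_size); it is not the paper's implementation.

    import torch
    import torch.nn as nn

    hidden_size = 32
    # health: (batch, T, 1) structural health signal;
    # interventions: (batch, T, 1) binary maintenance records.
    health = torch.randn(1, 100, 1)
    interventions = (torch.rand(1, 100, 1) < 0.05).float()

    # Concatenation method: stack the binary record onto the health
    # signal and feed both channels to the LSTM at every step.
    lstm2 = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
    out_concat, _ = lstm2(torch.cat([health, interventions], dim=-1))

    # Reinitialization method: feed only the health signal, but zero the
    # LSTM's internal memory whenever an intervention is recorded.
    lstm1 = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
    h = torch.zeros(1, 1, hidden_size)
    c = torch.zeros(1, 1, hidden_size)
    outputs = []
    for t in range(health.size(1)):
        if interventions[0, t, 0] > 0:       # maintenance performed at t
            h, c = torch.zeros_like(h), torch.zeros_like(c)
        o, (h, c) = lstm1(health[:, t:t + 1, :], (h, c))
        outputs.append(o)
    out_reinit = torch.cat(outputs, dim=1)   # (1, 100, hidden_size)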

  • Time Series Forecasting Based on Convolution Transformer

    Na WANG  Xianglian ZHAO  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2023/02/15
      Vol:
    E106-D No:5
      Page(s):
    976-985

    Time series forecasting is essential in many real-world fields. Recent studies have shown that the Transformer has clear advantages for such problems, especially for long input sequences and long-horizon forecasting. To improve the efficiency and local stability of the Transformer, these studies combine the Transformer with CNNs in various structures. However, previous Transformer-based forecasting models do not make full use of CNNs, and the two have not been combined effectively. To address this problem, we propose a time series forecasting algorithm based on a convolution Transformer, with four components. (1) ES attention mechanism: external attention is combined with the traditional self-attention mechanism through a two-branch network, which reduces the computational cost of self-attention and yields higher forecasting accuracy. (2) Frequency enhanced block: a frequency enhanced block placed before the ES attention module captures important structures in the time series through frequency-domain mapping. (3) Causal dilated convolution: the self-attention modules are connected by replacing standard convolution layers with causal dilated convolution layers, which grows the receptive field exponentially without increasing the computational cost. (4) Multi-layer feature fusion: the outputs of the different self-attention modules are extracted, and convolutional layers adjust the feature-map sizes for fusion, yielding finer-grained feature information at negligible computational cost. Experiments on real-world datasets show that the proposed model substantially improves the real-time forecasting performance of current state-of-the-art Transformer models while incurring significantly lower computation and memory costs. Compared with previous algorithms, the proposed algorithm achieves greater improvements in both effectiveness and forecasting accuracy.
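
    Of the four components, the causal dilated convolution is the most self-contained; the following PyTorch sketch shows the general technique (a left-padded dilated 1-D convolution). All names and sizes are illustrative assumptions, not the paper's code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalDilatedConv1d(nn.Module):
        """1-D convolution whose output at time t sees only inputs <= t."""
        def __init__(self, channels, kernel_size=3, dilation=1):
            super().__init__()
            # Left-pad by (kernel_size - 1) * dilation so that no future
            # samples leak into the convolution window.
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(channels, channels, kernel_size,
                                  dilation=dilation)

        def forward(self, x):                  # x: (batch, channels, time)
            return self.conv(F.pad(x, (self.pad, 0)))

    # Stacking layers with dilations 1, 2, 4, 8 grows the receptive field
    # exponentially while the per-layer cost stays constant.
    stack = nn.Sequential(*[CausalDilatedConv1d(16, dilation=2 ** i)
                            for i in range(4)])
    y = stack(torch.randn(8, 16, 128))         # -> (8, 16, 128)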

  • An Online Self-Constructive Normalized Gaussian Network with Localized Forgetting

    Jana BACKHUS  Ichigaku TAKIGAWA  Hideyuki IMAI  Mineichi KUDO  Masanori SUGIMOTO  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E100-A No:3
      Page(s):
    865-876

    In this paper, we introduce a self-constructive Normalized Gaussian Network (NGnet) for online learning tasks. In online tasks, data samples arrive sequentially and domain knowledge is often limited, so the NGnet requires learning methods that perform robustly and dynamically select an accurate model size. We revise a previously proposed localized forgetting approach for the NGnet and adapt several unit manipulation mechanisms to it for dynamic model selection. The mechanisms are improved for greater robustness in environments prone to negative interference, and a new merge manipulation is introduced to deal with model redundancies. The effectiveness of the proposed method is compared with the previous localized forgetting approach and an established learning method for the NGnet. Several experiments are conducted on a function approximation task and a chaotic time series forecasting task. The proposed approach shows robust and favorable performance in different learning situations across all testbeds.
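
    For reference, an NGnet's forward computation (normalized Gaussian units gating local linear models) can be sketched in a few lines of NumPy. Parameter names and values here are illustrative assumptions, not the paper's code; the paper's contribution concerns how such parameters are learned online and how units are added, deleted, and merged.

    import numpy as np

    rng = np.random.default_rng(0)
    M, d = 5, 2                      # number of units, input dimension
    mu = rng.normal(size=(M, d))     # Gaussian centers
    sigma2 = np.ones(M)              # isotropic variances
    W = rng.normal(size=(M, d))      # local linear weights
    b = rng.normal(size=M)           # local biases

    def ngnet(x):
        # Unnormalized Gaussian activation of each unit.
        g = np.exp(-np.sum((x - mu) ** 2, axis=1) / (2 * sigma2))
        # Normalization: the units compete, so activations sum to one.
        p = g / g.sum()
        # Output: activation-weighted sum of the local linear models.
        return p @ (W @ x + b)

    print(ngnet(np.array([0.5, -0.3])))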