
Keyword Search Result

[Keyword] incremental learning (12 hits)

Hits 1-12 of 12
  • Finformer: Fast Incremental and General Time Series Data Prediction Open Access

    Savong BOU  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

     
    PAPER

    Publicized: 2024/01/09
    Vol: E107-D No:5
    Page(s): 625-637

    Forecasting time-series data is useful in many fields, such as stock price prediction, autonomous driving, and weather forecasting. Many existing forecasting models work well on short-sequence time series, but their performance suffers significantly on long-sequence time series. Research in this direction has recently intensified, and Informer is currently the most efficient prediction model. Informer's main drawback is that it does not allow for incremental learning. In this paper, we propose a Fast Informer called Finformer, which addresses this bottleneck by reducing Informer's training/prediction time. Finformer efficiently computes the positional/temporal/value embeddings and the Query/Key/Value of the self-attention incrementally. Theoretically, Finformer improves the speed of both training and prediction over the state-of-the-art Informer model. Extensive experiments show that Finformer is about 26% faster than Informer for both short- and long-sequence time series prediction. In addition, Finformer is about 20% faster than InTrans for the general Conv1d; InTrans is one of our previous works and the predecessor of Finformer.
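
    A minimal sketch of the incremental Query/Key/Value idea the abstract describes, assuming one cached projection per past timestep (this illustrates the general technique, not the authors' Finformer code; all names such as W_q and d_model are ours):

    ```python
    # Incremental Q/K/V: when one new timestep arrives, only its own
    # projections are computed and appended; earlier projections are
    # reused from a cache instead of being recomputed for the whole
    # sequence. Illustrative sketch, not the authors' implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_k = 16, 8
    W_q = rng.standard_normal((d_model, d_k))
    W_k = rng.standard_normal((d_model, d_k))
    W_v = rng.standard_normal((d_model, d_k))

    Q_cache = np.empty((0, d_k))   # projections of all past timesteps
    K_cache = np.empty((0, d_k))
    V_cache = np.empty((0, d_k))

    def append_timestep(x_t):
        """Extend Q/K/V with one new embedded timestep: cost is independent of sequence length."""
        global Q_cache, K_cache, V_cache
        Q_cache = np.vstack([Q_cache, x_t @ W_q])
        K_cache = np.vstack([K_cache, x_t @ W_k])
        V_cache = np.vstack([V_cache, x_t @ W_v])

    def attention():
        """Self-attention over the cached projections."""
        scores = Q_cache @ K_cache.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        return weights @ V_cache

    for _ in range(5):                         # a toy stream of timesteps
        append_timestep(rng.standard_normal(d_model))
    print(attention().shape)                   # (5, 8)
    ```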

  • Data Augmented Incremental Learning (DAIL) for Unsupervised Data

    Sathya MADHUSUDHANAN  Suresh JAGANATHAN  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2022/03/14
    Vol: E105-D No:6
    Page(s): 1185-1195

    Incremental learning, a machine learning methodology, trains on continuously arriving input data and extends the model's knowledge. With unlabeled data streams, the incremental learning task becomes more challenging. Our newly proposed incremental learning methodology, Data Augmented Incremental Learning (DAIL), learns ever-increasing real-time streams with reduced memory resources and time. Initially, the unlabeled batches of data streams are clustered using the proposed clustering algorithm, Clustering based on Autoencoder and Gaussian Model (CLAG). Later, DAIL creates an updated incremental model for the labeled clusters using data augmentation. DAIL avoids retraining old samples and retains only the most recently updated incremental model, which holds all old class information. The use of data augmentation in DAIL combines similar clusters generated from different data batches. A series of experiments verified the significant performance of CLAG and DAIL, producing a scalable and efficient incremental model.
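
    A rough structural sketch of the flow the abstract describes, with loudly labeled stand-ins: a Gaussian mixture below replaces the CLAG autoencoder-plus-Gaussian clustering, and merging similar clusters is reduced to a mean-distance test (merge_tol and every other name are our assumptions, not the paper's method):

    ```python
    # Stand-in sketch: each unlabeled batch is clustered, clusters similar
    # to already-known ones are merged, and only a summary of the current
    # model is retained (no old samples are kept for retraining).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    cluster_means = []                    # running summary of known clusters

    def process_batch(X, n_clusters=3, merge_tol=1.0):
        gm = GaussianMixture(n_components=n_clusters, random_state=0).fit(X)
        for mu in gm.means_:
            # merge with an existing cluster if close, else register a new one
            if cluster_means and min(np.linalg.norm(mu - m)
                                     for m in cluster_means) < merge_tol:
                continue
            cluster_means.append(mu)

    for _ in range(4):                    # a toy stream of unlabeled batches
        centers = rng.uniform(-5, 5, size=(3, 2))
        X = np.vstack([c + 0.2 * rng.standard_normal((50, 2)) for c in centers])
        process_batch(X)
    print(len(cluster_means), "clusters retained")
    ```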

  • Polarity Classification of Social Media Feeds Using Incremental Learning — A Deep Learning Approach

    Suresh JAGANATHAN  Sathya MADHUSUDHANAN  

     
    PAPER-Neural Networks and Bioengineering

    Publicized: 2021/09/15
    Vol: E105-A No:3
    Page(s): 584-593

    Online feeds are streamed continuously in batches with varied polarities at varying times. A system handling online feeds must be trained to classify all of the varying polarities that occur dynamically. A polarity classification system designed for online feeds must address two significant challenges: i) stability-plasticity and ii) category proliferation. These challenges can be addressed using incremental learning, which learns new classes dynamically while retaining previously learned knowledge. This paper proposes a new incremental learning methodology, ILOF (Incremental Learning of Online Feeds), which classifies the feeds by adopting deep learning techniques such as RNNs (Recurrent Neural Networks) and LSTM (Long Short-Term Memory), together with an ELM (Extreme Learning Machine), to address the above problems. The proposed method creates a separate model for each batch using ELM and incrementally learns from the trained batches. Training each batch avoids retraining old feeds, thus saving training time and memory space; the trained feeds can be discarded when a new batch of feeds arrives. Experiments carried out on standard datasets comprising long feeds (IMDB, Sentiment140) and short feeds (Twitter, WhatsApp, and Twitter airline sentiment) showed positive results in terms of performance and accuracy.
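
    Since the method builds one ELM per batch, a minimal generic ELM sketch may help: the random hidden weights stay fixed and the output weights are solved in closed form with a pseudo-inverse, so each batch is trained once and can then be discarded (this is a textbook ELM, not the authors' ILOF code; all sizes are illustrative):

    ```python
    # One ELM per labeled batch: random fixed hidden layer, closed-form
    # output weights. Old batches are never revisited.
    import numpy as np

    rng = np.random.default_rng(0)

    class ELM:
        def __init__(self, n_in, n_hidden, n_out):
            self.W = rng.standard_normal((n_in, n_hidden))  # fixed random weights
            self.b = rng.standard_normal(n_hidden)
            self.beta = np.zeros((n_hidden, n_out))

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit(self, X, Y):
            # single closed-form solve; no iterative retraining
            self.beta = np.linalg.pinv(self._hidden(X)) @ Y
            return self

        def predict(self, X):
            return self._hidden(X) @ self.beta

    models = []                       # one incremental model per feed batch
    for batch in range(3):            # toy stream of labeled batches
        X = rng.standard_normal((40, 5))
        Y = np.eye(2)[rng.integers(0, 2, 40)]      # one-hot polarity labels
        models.append(ELM(5, 20, 2).fit(X, Y))     # batch can be discarded now
    print(models[-1].predict(rng.standard_normal((1, 5))))
    ```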

  • Personalized Food Image Classifier Considering Time-Dependent and Item-Dependent Food Distribution Open Access

    Qing YU  Masashi ANZAWA  Sosuke AMANO  Kiyoharu AIZAWA  

     
    PAPER

    Publicized: 2019/06/21
    Vol: E102-D No:11
    Page(s): 2120-2126

    Since food diaries can help people develop healthy eating habits, food image recognition is in high demand to reduce the effort of food recording. Previous studies have worked on this challenging domain with datasets having fixed numbers of samples and classes. In a real-world setting, however, it is impossible to include all foods in a database, because the number of food classes is large and increases continually. In addition, inter-class similarity and intra-class diversity also make recognition difficult. In this paper, we address these problems by using deep convolutional neural network features to build a personalized classifier that incrementally learns the user's data and adapts to the user's eating habits. As a result, we achieved state-of-the-art food image recognition accuracy with personalization over 300 food records per user.
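
    A hedged sketch of the personalization mechanism: a fixed feature extractor (a deep CNN in the paper; random vectors stand in for its features here) feeds a nearest-class-mean classifier whose per-class means are updated one food record at a time, so new classes can appear at any point. This is our illustrative reading, not the paper's classifier:

    ```python
    # Incremental nearest-class-mean on top of fixed features: each new
    # food record updates only that class's running mean, and unseen
    # classes are created on the fly.
    import numpy as np

    class IncrementalNCM:
        def __init__(self):
            self.sums, self.counts = {}, {}

        def update(self, feat, label):            # one food record at a time
            if label not in self.sums:
                self.sums[label] = np.zeros_like(feat)
                self.counts[label] = 0
            self.sums[label] += feat
            self.counts[label] += 1

        def predict(self, feat):
            means = {c: s / self.counts[c] for c, s in self.sums.items()}
            return min(means, key=lambda c: np.linalg.norm(feat - means[c]))

    rng = np.random.default_rng(0)
    clf = IncrementalNCM()
    for label in ["ramen", "salad", "ramen"]:     # toy user records
        clf.update(rng.standard_normal(64), label)
    print(clf.predict(rng.standard_normal(64)))
    ```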

  • Efficient Class-Incremental Learning Based on Bag-of-Sequencelets Model for Activity Recognition

    Jong-Woo LEE  Ki-Sang HONG  

     
    PAPER-Vision

    Vol: E102-A No:9
    Page(s): 1293-1302

    We propose a class-incremental learning framework for human activity recognition based on the Bag-of-Sequencelets (BoS) model. The framework updates learned models efficiently, without relearning them, when training data of new classes are added. All types of video features can be used in this framework, including hand-crafted features, Convolutional Neural Network (CNN) based features, and combinations of the two. Compared with the original BoS, the new framework greatly reduces learning time with little loss of classification accuracy.
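
    Setting the BoS specifics aside, the structural point (adding a class without relearning existing models) can be sketched with one independent scorer per class; everything below, including the simple prototype scorer, is an illustrative stand-in for the paper's per-class models:

    ```python
    # Class-incremental structure: each class owns its own scorer, so
    # registering a new class fits one new model and leaves old ones
    # untouched.
    import numpy as np

    rng = np.random.default_rng(0)
    class_models = {}                        # class name -> prototype vector

    def add_class(name, feats):
        """Fit a simple prototype scorer for the new class only."""
        class_models[name] = feats.mean(axis=0)   # existing models untouched

    def classify(feat):
        return max(class_models, key=lambda c: feat @ class_models[c])

    add_class("walk", rng.standard_normal((30, 16)) + 1.0)
    add_class("run",  rng.standard_normal((30, 16)) - 1.0)
    add_class("jump", rng.standard_normal((30, 16)) + 3.0)  # added later, cheaply
    print(classify(rng.standard_normal(16) + 3.0))
    ```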

  • Incremental Unsupervised-Learning of Appearance Manifold with View-Dependent Covariance Matrix for Face Recognition from Video Sequences

    Lina  Tomokazu TAKAHASHI  Ichiro IDE  Hiroshi MURASE  

     
    PAPER-Pattern Recognition

    Vol: E92-D No:4
    Page(s): 642-652

    We propose an appearance manifold with a view-dependent covariance matrix for face recognition from video sequences in two learning frameworks: supervised learning and incremental unsupervised learning. The advantages of this method are as follows. First, the appearance manifold with a view-dependent covariance matrix is robust to pose changes and also noise invariant, since the embedded covariance matrices are calculated based on their poses in order to learn the samples' distributions along the manifold. Moreover, the proposed incremental unsupervised-learning framework is more realistic for real-world face recognition applications, since it is difficult to collect large amounts of face sequences covering complete poses (from left side view to right side view) for training. The incremental unsupervised-learning framework allows us to train the system with the available initial sequences and later update the system's knowledge incrementally every time an unlabeled sequence is input. In addition, we integrate the model with a pose estimation system to improve classification accuracy and to easily detect sequences with overlapping poses for the merging process in the incremental unsupervised-learning framework. Experimental results showed that, in both frameworks, the proposed method recognizes faces from video sequences accurately.
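
    A simplified sketch of classification with a view-dependent covariance: each subject is modeled by per-pose means and covariance matrices, and a probe frame is scored by squared Mahalanobis distance under the covariance of its estimated pose. Random features and given pose labels stand in for the paper's manifold and pose estimator:

    ```python
    # Per-pose Gaussian models: the covariance used to score a probe
    # depends on the probe's pose, which is the core of the
    # view-dependent covariance idea.
    import numpy as np

    rng = np.random.default_rng(0)
    poses = ["left", "front", "right"]

    def fit_person(frames_by_pose):
        model = {}
        for pose, F in frames_by_pose.items():
            mu = F.mean(axis=0)
            cov = np.cov(F.T) + 1e-3 * np.eye(F.shape[1])   # regularized
            model[pose] = (mu, np.linalg.inv(cov))
        return model

    def score(model, feat, pose):
        mu, cov_inv = model[pose]
        d = feat - mu
        return d @ cov_inv @ d            # squared Mahalanobis distance

    people = {name: fit_person({p: rng.standard_normal((50, 4)) + i
                                for p in poses})
              for i, name in enumerate(["A", "B"])}
    probe = rng.standard_normal(4) + 1    # a frame near person "B"
    print(min(people, key=lambda n: score(people[n], probe, "front")))
    ```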

  • Incremental Learning and Model Selection for Radial Basis Function Network through Sleep

    Koichiro YAMAUCHI  Jiro HAYAMI  

     
    PAPER-Algorithm Theory

    Vol: E90-D No:4
    Page(s): 722-735

    Model selection for neural networks is an essential procedure for achieving not only high generalization but also a compact data model; in terms of obtaining compact models, neural networks usually outperform other kinds of machine learning methods. Generally, models are selected by trial-and-error testing using all learning samples given in advance. In many cases, however, it is difficult and time consuming to prepare all learning samples beforehand. To overcome these inconveniences, we propose a hybrid on-line learning system for a radial basis function (RBF) network that quickly learns novel instances by rote during on-line periods (awake phases) and repeats pseudo-rehearsal for model selection during out-of-service periods (sleep phases). We call this system Incremental Learning with Sleep (ILS). During sleep phases, the system basically stops learning novel instances; during awake phases, it responds quickly. We also extended the system to shorten the periodic sleep periods. Experimental results showed that the system selects more compact data models than those selected by other machine learning systems.
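
    A toy sketch of the awake/sleep cycle described above: awake, the network memorizes each novel instance by rote (one new RBF center per instance); asleep, pseudo-rehearsal draws inputs, answers them with the current network, and refits a much smaller network on those pseudo-samples. The widths, unit counts, and least-squares fits are our simplifying assumptions:

    ```python
    # Awake: rote RBF learning. Sleep: pseudo-rehearsal onto a compact model.
    import numpy as np

    rng = np.random.default_rng(0)

    def rbf_design(X, centers, width=0.5):
        d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    # --- awake phase: one center per novel instance ---
    X = rng.uniform(-2, 2, (40, 1))
    y = np.sin(2 * X[:, 0])
    centers = X.copy()
    w = np.linalg.lstsq(rbf_design(X, centers), y, rcond=None)[0]

    # --- sleep phase: pseudo-rehearsal onto far fewer units ---
    X_pseudo = rng.uniform(-2, 2, (200, 1))            # generated inputs
    y_pseudo = rbf_design(X_pseudo, centers) @ w       # answered by current net
    small_centers = np.linspace(-2, 2, 8)[:, None]
    w_small = np.linalg.lstsq(rbf_design(X_pseudo, small_centers),
                              y_pseudo, rcond=None)[0]
    print(centers.shape[0], "->", small_centers.shape[0], "RBF units")
    ```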

  • Self-Organizing Map Based on Block Learning

    Akitsugu OHTSUKA  Naotake KAMIURA  Teijiro ISOKAWA  Nobuyuki MATSUI  

     
    PAPER-Nonlinear Problems

    Vol: E88-A No:11
    Page(s): 3151-3160

    A block-matching-based self-organizing map (BMSOM) is presented. A winner is found for each block, which is a set of neurons arranged in a square. The proposed learning process updates the reference vectors of all neurons in the winner block, and the degree of vector modification is mainly controlled by the size (i.e., the number of neurons) of the winner block. To prevent a single cluster of neurons from splitting into disjoint clusters, a restriction on the block size is imposed at the beginning of learning; at the main stage, this restriction is lifted. In BMSOM learning, the size of the winner block does not always decrease monotonically, and the formula used to update the reference vectors is basically not time-controlled. Therefore, even in a nonstationary environment, training the map can proceed without interruption to adjust time-controlled parameters such as the learning rate. Numerical results demonstrate that the BMSOM improves the plasticity of maps in nonstationary environments and in incremental learning.
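
    A rough numpy sketch of the block update: slide a square block over the map to find the winner block by total matching error, then move every reference vector in that block toward the input. The fixed block size and learning rate here are illustrative assumptions; the paper controls modification by block size and restricts block size early in learning:

    ```python
    # Block-matching SOM update: winner is a whole square block of
    # neurons, and all of its reference vectors are updated together.
    import numpy as np

    rng = np.random.default_rng(0)
    G, D = 8, 2                              # 8x8 map, 2-D inputs
    W = rng.uniform(0, 1, (G, G, D))         # reference vectors

    def update(x, block=2, rate=0.1):
        best, pos = np.inf, (0, 0)
        for i in range(0, G - block + 1):    # slide the block over the map
            for j in range(0, G - block + 1):
                err = ((W[i:i+block, j:j+block] - x) ** 2).sum()
                if err < best:
                    best, pos = err, (i, j)
        i, j = pos                           # move the whole winner block
        W[i:i+block, j:j+block] += rate * (x - W[i:i+block, j:j+block])

    for x in rng.uniform(0, 1, (500, D)):
        update(x)
    print(W.mean(axis=(0, 1)))
    ```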

  • Numerical Evaluation of Incremental Vector Quantization Using Stochastic Relaxation

    Noritaka SHIGEI  Hiromi MIYAJIMA  Michiharu MAEDA  

     
    PAPER

    Vol: E87-A No:9
    Page(s): 2364-2371

    Learning algorithms for vector quantization (VQ) fall into two categories: batch learning and incremental learning. Incremental learning is more useful than batch learning because, unlike batch learning, it can be performed either on-line or off-line. In this paper, we develop effective incremental learning methods using stochastic relaxation (SR) techniques, which were originally developed for batch learning, where they provide good global optimization without greatly increasing the computational cost. We empirically investigate the effective implementation of SR for incremental learning. Specifically, we consider five SR methods: ISR1, ISR2, ISR3, WSR1, and WSR2. The ISRs and WSRs add noise to the input and weight vectors, respectively; the methods differ in when the perturbed input or weight vectors are used in learning. These SR methods are applied to three types of incremental learning: K-means, Neural-Gas (NG), and Kohonen's Self-Organizing Map (SOM). We comprehensively evaluate these combinations in terms of accuracy and computation time. Our simulation results show that K-means with ISR3 is overall the most effective combination and is superior to the conventional NG method, which is known as an excellent method.
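
    For concreteness, here is incremental (online) K-means with an ISR-style perturbation: each arriving input is corrupted by noise whose amplitude decays over time before the winner is updated. The decay schedule and rates are our assumptions, and this is not necessarily the exact ISR3 variant evaluated in the paper:

    ```python
    # Online K-means with input-noise stochastic relaxation: decaying
    # noise perturbs each input before the winner update, which helps
    # the codebook escape poor local minima early in training.
    import numpy as np

    rng = np.random.default_rng(0)
    K, D, T = 4, 2, 2000
    codebook = rng.uniform(0, 1, (K, D))
    data = rng.uniform(0, 1, (T, D))

    for t, x in enumerate(data):
        noise = rng.standard_normal(D) * 0.3 * (1 - t / T)    # decaying noise
        x_pert = x + noise
        k = np.argmin(((codebook - x_pert) ** 2).sum(axis=1))  # winner
        codebook[k] += 0.05 * (x_pert - codebook[k])            # online update
    print(codebook)
    ```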

  • Incremental Construction of Projection Generalizing Neural Networks

    Masashi SUGIYAMA  Hidemitsu OGAWA  

     
    PAPER-Biocybernetics, Neurocomputing

    Vol: E85-D No:9
    Page(s): 1433-1442

    In many practical situations in neural network learning, training examples tend to be supplied one by one. In such situations, incremental learning seems more natural than batch learning in view of how human beings learn. In this paper, we propose an incremental learning method for neural networks under the projection learning criterion. Although projection learning is a linear learning method, achieving this goal is not straightforward, since it involves redundant expressions of functions with over-complete bases, which is essentially related to pseudo biorthogonal bases (or frames). The proposed method provides exactly the same learning result as batch learning, and we show theoretically that it is computationally more efficient than batch learning.
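
    The paper's construction rests on pseudo biorthogonal bases; as a simpler stand-in that illustrates the headline property (the incremental result coinciding with the batch result for a linear learner), here is recursive least squares, whose per-sample updates reproduce the batch least-squares solution. The large initialization constant is an assumption:

    ```python
    # Recursive least squares: one-by-one updates that match the batch
    # least-squares answer, illustrating "incremental == batch" for a
    # linear learning criterion.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    P = 1e6 * np.eye(d)                  # large prior ~ unregularized fit
    w = np.zeros(d)

    X = rng.standard_normal((100, d))
    y = X @ rng.standard_normal(d)       # noiseless linear targets

    for x, t in zip(X, y):               # samples arrive one by one
        Px = P @ x
        g = Px / (1.0 + x @ Px)          # gain vector
        w += g * (t - x @ w)
        P -= np.outer(g, Px)

    w_batch = np.linalg.lstsq(X, y, rcond=None)[0]
    print(np.allclose(w, w_batch, atol=1e-4))   # True
    ```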

  • Trade-Off between Requirement of Learning and Computational Cost

    Tzung-Pei HONG  Ching-Hung WANG  Shian-Shyong TSENG  

     
    PAPER-Artificial Intelligence and Cognitive Science

    Vol: E81-D No:6
    Page(s): 565-571

    Machine learning in real-world situations sometimes starts from an initial collection of training instances, with learning then proceeding off and on as new training instances arrive intermittently. We propose the idea of two-phase learning for effectively solving learning problems in which training instances arrive in this two-stage way. Four two-phase learning algorithms based on the learning method PRISM are also proposed for inducing rules from training instances. These alternatives form a spectrum in which meeting the requirement of PRISM (keeping down the number of irrelevant attributes) depends heavily on the computational cost spent. A suitable alternative, as a trade-off between computational cost and fulfillment of the requirement, can then be chosen according to the demands of the application domain.
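
    An illustrative PRISM-like covering sketch of the two-phase flow (our toy rendering of one point on such a spectrum, not one of the paper's four algorithms): phase 1 induces rules from the initial instances; phase 2, when new instances arrive, re-induces only over instances the old rules do not already cover:

    ```python
    # Two-phase rule induction with a greedy PRISM-style covering learner.
    def covers(rule, inst):
        return all(inst.get(a) == v for a, v in rule)

    def induce(instances, target):
        """Grow one rule per outer pass until all target instances are covered."""
        rules, E = [], list(instances)
        while any(i["class"] == target for i in E):
            rule, cand = [], list(E)
            while any(i["class"] != target for i in cand):
                pairs = {(a, v) for i in cand for a, v in i.items()
                         if a != "class"}
                def precision(p):
                    hit = [i for i in cand if i.get(p[0]) == p[1]]
                    return sum(i["class"] == target for i in hit) / len(hit)
                best = max(pairs, key=precision)
                rule.append(best)
                cand = [i for i in cand if i.get(best[0]) == best[1]]
            rules.append(rule)
            E = [i for i in E if not covers(rule, i)]   # drop covered instances
        return rules

    # Phase 1: induce rules from the initial collection of instances.
    initial = [{"sky": "sunny", "wind": "weak",   "class": "yes"},
               {"sky": "rainy", "wind": "strong", "class": "no"}]
    rules = induce(initial, "yes")

    # Phase 2: when new instances arrive, re-induce only over instances
    # the phase-1 rules do not already cover (negatives kept for context).
    new = [{"sky": "rainy", "wind": "weak", "class": "yes"}]
    negatives = [i for i in initial if i["class"] != "yes"]
    uncovered = [i for i in new if not any(covers(r, i) for r in rules)]
    rules += induce(negatives + uncovered, "yes")
    print(rules)
    ```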

  • Incremental Learning and Generalization Ability of Artificial Neural Network Trained by Fahlman and Lebiere's Learning Algorithm

    Masanori HAMAMOTO  Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Neural Networks

    Vol: E76-A No:2
    Page(s): 242-247

    We apply Fahlman and Lebiere's (FL) algorithm to network synthesis and incremental learning by making use of already-trained networks, each performing a specified task, to design a system that performs a global or extended task without destroying the information gained by the previously trained nets. Our investigation shows that the synthesized or expanded FL networks generalize better than backpropagation (BP) networks, in which the number of newly added hidden units must be pre-specified.
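
    A toy sketch of the synthesis idea: two already-trained networks are frozen, their outputs are appended to the input features, and only a new output layer is trained for the extended task, so the old networks' knowledge stays intact. The FL (cascade-correlation) unit-training details are omitted; random frozen nets and a least-squares output layer are our stand-ins:

    ```python
    # Network synthesis from frozen subnetworks: reuse their outputs as
    # features and fit only the new output weights for the global task.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_trained_net(d_in, d_hid):
        """Stand-in for a previously trained subnetwork (kept frozen)."""
        W1 = rng.standard_normal((d_in, d_hid))
        W2 = rng.standard_normal(d_hid)
        return lambda X: np.tanh(np.tanh(X @ W1) @ W2)

    net_a = make_trained_net(4, 8)         # performs subtask A
    net_b = make_trained_net(4, 8)         # performs subtask B

    X = rng.standard_normal((200, 4))
    y = (net_a(X) + net_b(X) > 0).astype(float)   # toy task combining subtasks

    # features = raw inputs + frozen subnetwork outputs (never retrained)
    F = np.column_stack([X, net_a(X), net_b(X), np.ones(len(X))])
    w = np.linalg.lstsq(F, y, rcond=None)[0]      # train only the new layer
    print("train accuracy:", ((F @ w > 0.5) == y).mean())
    ```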