
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4

Advance publication (published online immediately after acceptance)

Volume E93-D No.10  (Publication Date: 2010/10/01)

    Special Section on Data Mining and Statistical Science
  • FOREWORD Open Access

    Masashi SUGIYAMA  

     
    FOREWORD

      Page(s):
    2671-2671
  • Cartesian Kernel: An Efficient Alternative to the Pairwise Kernel

    Hisashi KASHIMA  Satoshi OYAMA  Yoshihiro YAMANISHI  Koji TSUDA  

     
    PAPER

      Page(s):
    2672-2679

    Pairwise classification has many applications, including network prediction, entity resolution, and collaborative filtering. The pairwise kernel has been proposed for these purposes by several research groups independently and has been used successfully in several fields. In this paper, we propose an efficient alternative which we call the Cartesian kernel. While the existing pairwise kernel (which we refer to as the Kronecker kernel) can be interpreted as the weighted adjacency matrix of the Kronecker product graph of two graphs, the Cartesian kernel can be interpreted as that of the Cartesian product graph, which is sparser than the Kronecker product graph. We discuss the generalization bounds of the two pairwise kernels by using eigenvalue analysis of the kernel matrices. We also consider the N-wise extensions of the two pairwise kernels. Experimental results show that the Cartesian kernel is much faster than the Kronecker kernel and, at the same time, competitive with it in predictive performance.
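As usually defined, the Kronecker pairwise kernel compares both elements of two pairs with a base kernel K, while the Cartesian kernel requires one element of the pair to match exactly. A minimal NumPy sketch of these definitions (pair-order symmetrization omitted for brevity; not the authors' implementation):

```python
import numpy as np

def kronecker_pairwise(K, i, j, k, l):
    # Kronecker pairwise kernel: both elements of each pair are compared
    return K[i, k] * K[j, l]

def cartesian_pairwise(K, i, j, k, l):
    # Cartesian pairwise kernel: one element must match exactly, the other
    # is compared with the base kernel -- hence the sparser structure
    return K[i, k] * (j == l) + (i == k) * K[j, l]
```

With a base kernel matrix over two objects, the Cartesian kernel vanishes whenever neither element of the pairs coincides, which is the source of its sparsity.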

  • Gaussian Process Regression with Measurement Error

    Yukito IBA  Shotaro AKAHO  

     
    PAPER

      Page(s):
    2680-2689

    Regression analysis that incorporates measurement errors in input variables is important in various applications. In this study, we consider this problem within the framework of Gaussian process regression. The proposed method can also be regarded as a generalization of kernel regression that includes errors in the regressors. A Markov chain Monte Carlo method is introduced, where the infinite dimensionality of the Gaussian process is dealt with by a trick that exchanges the order of sampling of the latent variables and the function. The proposed method is tested with artificial data.
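For reference, plain Gaussian process regression, which the paper generalizes to handle noisy inputs, computes its posterior mean as follows. The measurement-error MCMC itself is beyond this sketch, and the kernel and hyperparameters below are illustrative only:

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-(X1[:, None] - X2[None, :]) ** 2 / (2 * ell ** 2))

def gp_mean(Xtr, ytr, Xte, ell=1.0, noise=1e-4):
    # standard GP posterior mean: K_*(K + sigma^2 I)^{-1} y
    K = rbf(Xtr, Xtr, ell) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr, ell) @ np.linalg.solve(K, ytr)
```

The paper's contribution is, roughly, to treat the training inputs `Xtr` themselves as latent variables with observation noise and sample them alongside the function.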

  • Superfast-Trainable Multi-Class Probabilistic Classifier by Least-Squares Posterior Fitting

    Masashi SUGIYAMA  

     
    PAPER

      Page(s):
    2690-2701

    Kernel logistic regression (KLR) is a powerful and flexible classification algorithm, which possesses the ability to provide the confidence of class prediction. However, its training, typically carried out by (quasi-)Newton methods, is rather time-consuming. In this paper, we propose an alternative probabilistic classification algorithm called the Least-Squares Probabilistic Classifier (LSPC). KLR models the class-posterior probability by a log-linear combination of kernel functions, and its parameters are learned by (regularized) maximum likelihood. In contrast, LSPC employs a linear combination of kernel functions, and its parameters are learned by regularized least-squares fitting of the true class-posterior probability. Thanks to this linear regularized least-squares formulation, the solution of LSPC can be computed analytically just by solving a regularized system of linear equations in a class-wise manner. Thus LSPC is computationally very efficient and numerically stable. Through experiments, we show that the computation time of LSPC is faster than that of KLR by two orders of magnitude, with comparable classification accuracy.
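The class-wise analytic solution described above can be sketched as follows. This is a minimal illustration under assumed choices (a Gaussian kernel basis centered on all training points); details of the actual LSPC, such as basis subsampling, may differ:

```python
import numpy as np

def lspc_fit(X, y, sigma=1.0, lam=0.1):
    # one regularized least-squares problem per class, solved analytically
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))        # Gaussian kernel basis
    classes = np.unique(y)
    A = Phi.T @ Phi + lam * np.eye(len(X))      # shared system matrix
    Theta = np.column_stack(
        [np.linalg.solve(A, Phi.T @ (y == c).astype(float)) for c in classes]
    )
    return X, Theta, classes, sigma

def lspc_predict_proba(model, Xq):
    C, Theta, classes, sigma = model
    d2 = ((Xq[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))
    p = np.maximum(Phi @ Theta, 0.0)            # clip negative estimates
    p /= np.maximum(p.sum(axis=1, keepdims=True), 1e-12)  # renormalize
    return p, classes
```

No iterative optimization is involved: fitting reduces to one linear solve per class, which is what makes the method fast.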

  • Privacy Preserving Frequency Mining in 2-Part Fully Distributed Setting

    The Dung LUONG  Tu Bao HO  

     
    PAPER

      Page(s):
    2702-2708

    Recently, privacy preservation has become one of the key issues in data mining. In many data mining applications, computing the frequencies of values, or of tuples of values, in a data set is a fundamental operation that is used repeatedly. Within the context of privacy-preserving data mining, several privacy-preserving frequency mining solutions have been proposed; these solutions are crucial steps in many privacy-preserving data mining tasks, and each was designed for a particular distributed-data scenario. In this paper, we consider privacy-preserving frequency mining in a so-called 2-part fully distributed setting. In this scenario, the dataset is distributed across a large number of users, and each record is owned by two different users: one user knows the values of only a subset of the attributes, while the other knows the values of the remaining attributes. A miner aims to compute the frequencies of values or tuples of values while preserving each user's privacy. Some solutions based on randomization techniques can address this problem, but they suffer from a tradeoff between privacy and accuracy. We develop a cryptographic protocol for privacy-preserving frequency mining that ensures each user's privacy without loss of accuracy. The experimental results show that our protocol is efficient as well.
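The paper's protocol is cryptographic; as a simpler illustration of the underlying idea that a miner can learn a frequency without ever seeing an individual value, here is an additive secret-sharing sketch (not the paper's actual protocol):

```python
import random

MOD = 2 ** 31

def share(bit):
    # split a private 0/1 value into two random additive shares mod MOD;
    # each share alone is uniformly random and reveals nothing
    r = random.randrange(MOD)
    return r, (bit - r) % MOD

def frequency(shares_a, shares_b):
    # each aggregator sums its shares locally; only the two totals are
    # combined, so the miner learns the frequency but no individual bit
    return (sum(shares_a) + sum(shares_b)) % MOD
```

The reconstruction is exact, which mirrors the paper's point that cryptographic approaches avoid the privacy-accuracy tradeoff of randomization.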

  • Algorithm for Computing Convex Skyline Objectsets on Numerical Databases

    Md. Anisuzzaman SIDDIQUE  Yasuhiko MORIMOTO  

     
    PAPER

      Page(s):
    2709-2716

    Given a set of objects, a skyline query finds the objects that are not dominated by others. In this paper, we consider a skyline query for sets of objects in a database. Let s be the number of objects in each set and n be the number of objects in the database; the number of such sets amounts to the binomial coefficient C(n, s). We propose an efficient algorithm to compute the convex skyline of the C(n, s) sets, and we call the retrieved sets "convex skyline objectsets". Experimental evaluation using real and synthetic datasets demonstrates that the proposed skyline objectset query is meaningful and scalable enough to handle large and high-dimensional databases. Privacy awareness has recently become essential: sometimes individual values must be hidden and only aggregated values of objects may be disclosed. In such situations, conventional skyline queries cannot be used, and the proposed function can be a promising alternative for decision making in a privacy-aware environment.
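The dominance test at the heart of any skyline query can be sketched as follows (a naive O(n^2) illustration under a minimization convention; not the paper's objectset algorithm):

```python
def dominates(a, b):
    # a dominates b: no worse in every dimension, strictly better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(points):
    # the skyline is the set of points not dominated by any other point
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For example, among (1, 4), (2, 2), (4, 1), and (3, 3), the point (3, 3) is dominated by (2, 2) and drops out, while the other three form the skyline.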

  • Extracting Know-Who/Know-How Using Development Project-Related Taxonomies

    Makoto NAKATSUJI  Akimichi TANAKA  Takahiro MADOKORO  Kenichiro OKAMOTO  Sumio MIYAZAKI  Tadasu UCHIYAMA  

     
    PAPER

      Page(s):
    2717-2727

    Product developers frequently discuss topics related to their development project with others, but often use technical terms whose meanings are not clear to non-specialists. To provide non-experts with a precise and comprehensive understanding of the know-who/know-how being discussed, the method proposed herein categorizes the messages using a taxonomy of the products being developed and a taxonomy of tasks relevant to those products. The instances in the taxonomy are products and/or tasks manually selected as relevant to system development, and the concepts are defined by the taxonomy of instances. The proposed method first extracts phrases from discussion logs as data-driven instances relevant to system development. It then classifies those phrases to the concepts defined by taxonomy experts. The innovative feature of our method is that, in classifying a phrase to a concept, say C, the method considers the associations of the phrase not only with the instances of C, but also with the instances of the neighbor concepts of C (neighborhood being defined by the taxonomy). This approach is quite accurate in classifying phrases to concepts: the phrase is classified to C, not to the neighbors of C, even though they are quite similar to C. Next, we attach a data-driven concept to C; the data-driven concept includes the instances in C plus a classified phrase as a data-driven instance. We analyze know-who and know-how by using not only human-defined concepts but also these data-driven concepts. We evaluate our method using the mailing list of an actual project. It classified phrases with twice the accuracy possible with the TF/iDF method, which does not consider the neighboring concepts. The taxonomy with data-driven concepts provides more detailed know-who/know-how than can be obtained from the human-defined concepts alone or from the data-driven concepts as determined by the TF/iDF method.

  • Accurate Human Detection by Appearance and Motion

    Shaopeng TANG  Satoshi GOTO  

     
    PAPER

      Page(s):
    2728-2736

    In this paper, a human detection method is developed, with an appearance-based detector and a motion-based detector proposed respectively. A multi-scale block histogram of template (MB-HOT) feature is used to detect humans by appearance. It integrates gray-value and gradient-value information, and represents the relationship of three blocks. Experiments on the INRIA dataset show that this feature is more discriminative than other features, such as the histogram of oriented gradients (HOG). A motion-based feature is also proposed to capture the relative motion of the human body. This feature is calculated in the optical flow domain, and experimental results on our dataset show that it outperforms other motion-based features. The detection responses obtained by the two features are combined to reduce false detections. A graphics processing unit (GPU) based implementation is proposed to accelerate the calculation of the two features and make the method suitable for real-time applications.

  • Regular Section
  • Static Task Scheduling Algorithms Based on Greedy Heuristics for Battery-Powered DVS Systems

    Tetsuo YOKOYAMA  Gang ZENG  Hiroyuki TOMIYAMA  Hiroaki TAKADA  

     
    PAPER-Software System

      Page(s):
    2737-2746

    The principles for good design of battery-aware voltage scheduling algorithms for both aperiodic and periodic task sets on dynamic voltage scaling (DVS) systems are presented. The proposed algorithms are based on greedy heuristics suggested by several battery characteristics and by Lagrange multipliers, and they make more proper use of the battery characteristics in the early stage of scheduling. As a consequence, the proposed algorithms show superior results on synthetic examples of periodic and aperiodic task sets excerpted from comparative work, on uni- and multiprocessor platforms, respectively. In particular, for some large task sets, the proposed algorithms make task sets that were previously unschedulable due to battery exhaustion schedulable.

  • Efficient Distributed Web Crawling Utilizing Internet Resources

    Xiao XU  Weizhe ZHANG  Hongli ZHANG  Binxing FANG  

     
    PAPER-Data Engineering, Web Information Systems

      Page(s):
    2747-2762

    Internet computing is proposed to exploit personal computing resources across the Internet in order to build large-scale Web applications at lower cost. In this paper, a DHT-based distributed Web crawling model based on the concept of Internet computing is proposed, together with two optimizations that reduce the download time and waiting time of Web crawling tasks in order to increase the system's throughput and update rate. Based on our contributor-friendly download scheme, the improvement in download time is achieved by shortening the crawler-crawlee RTTs; to estimate the RTTs accurately, a network coordinate system is combined with the underlying DHT. The improvement in waiting time is achieved by redirecting incoming crawling tasks to lightly loaded crawlers in order to keep the queue on each crawler equally sized. We also propose a simple Web site partition method that splits a large Web site into smaller pieces in order to reduce the task granularity. All the proposed methods are evaluated through real Internet tests and simulations, showing satisfactory results.

  • A Priority Routing Protocol Based on Location and Moving Direction in Delay Tolerant Networks

    Jian SHEN  Sangman MOH  Ilyong CHUNG  

     
    PAPER-Information Network

      Page(s):
    2763-2775

    Delay Tolerant Networks (DTNs) are a class of emerging networks that experience frequent and long-duration partitions. Delay is inevitable in DTNs, so ensuring the validity and reliability of message transmission and making better use of buffer space are more important than concentrating on how to decrease the delay. In this paper, we present a novel routing protocol named Location and Direction Aware Priority Routing (LDPR) for DTNs, which utilizes the location and moving direction of nodes to deliver a message from source to destination. A node obtains its location and moving direction by periodically receiving beacon packets from anchor nodes and referring to the received signal strength indicator (RSSI) of the beacons. LDPR contains two schemes, a transmission scheme and a drop scheme, which exploit the nodes' location and moving-direction information to transmit messages and to store messages in buffer space, respectively. In addition, each message is assigned a certain priority according to its attributes (e.g., importance, validity, and security). The message priority decides the transmission order when delivering messages and the dropping sequence when the buffer is full. Simulation results show that the proposed LDPR protocol outperforms the epidemic routing (EPI) protocol, the prioritized epidemic routing (PREP) protocol, and the DTN hierarchical routing (DHR) protocol in terms of packet delivery ratio, normalized routing overhead, and average end-to-end delay. It is worth noting that LDPR does not need an infinite buffer size to ensure the packet delivery ratio, as EPI does. In particular, even when the buffer size is only 50, the packet delivery ratio of LDPR still reaches 93.9%, which can satisfy general communication demands. We expect LDPR to be of greater value than other existing solutions in highly disconnected and mobile networks.

  • A Practical Threshold Test Generation for Error Tolerant Application

    Hideyuki ICHIHARA  Kenta SUTOH  Yuki YOSHIKAWA  Tomoo INOUE  

     
    PAPER-Information Network

      Page(s):
    2776-2782

    Threshold testing, which is an LSI testing method based on the acceptability of faults, is effective in yield enhancement of LSIs and selective hardening for LSI systems. In this paper, we propose test generation models for threshold test generation. Using the proposed models, we can efficiently identify acceptable faults and generate test patterns for unacceptable faults with a general test generation algorithm, i.e., without a test generation algorithm specialized for threshold testing. Experimental results show that our approach is, in practice, effective.

  • A C-Testable 4-2 Adder Tree for an Easily Testable High-Speed Multiplier

    Nobutaka KITO  Kensuke HANAI  Naofumi TAKAGI  

     
    PAPER-Information Network

      Page(s):
    2783-2791

    A C-testable 4-2 adder tree for an easily testable high-speed multiplier is proposed, and a recursive method for test generation is shown. By using the specific patterns that we call 'alternately inverted patterns,' the adder tree, as well as partial product generators, can be tested with 14 patterns regardless of its operand size under the cell fault model. The test patterns are easily fed through the partial product generators. The hardware overhead of the 4-2 adder tree with partial product generators for a 64-bit multiplier is about 15%. By using a previously proposed easily testable adder as the final adder, we can obtain an easily testable high-speed multiplier.

  • GTRACE: Mining Frequent Subsequences from Graph Sequences

    Akihiro INOKUCHI  Takashi WASHIO  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2792-2804

    In recent years, the mining of a complete set of frequent subgraphs from labeled graph data has been studied extensively. However, to the best of our knowledge, no method has been proposed for finding frequent subsequences of graphs from a set of graph sequences. In this paper, we define a novel class of graph subsequences by introducing axiomatic rules for graph transformations, their admissibility constraints, and a union graph. Then we propose an efficient approach named "GTRACE" for enumerating frequent transformation subsequences (FTSs) of graphs from a given set of graph sequences. The fundamental performance of the proposed method is evaluated using artificial datasets, and its practicality is confirmed by experiments using real-world datasets.

  • Visual Knowledge Structure Reasoning with Intelligent Topic Map

    Huimin LU  Boqin FENG  Xi CHEN  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2805-2812

    This paper presents a visual knowledge structure reasoning method using an Intelligent Topic Map, which extends the conventional Topic Map in structure and enhances its reasoning functions. The method integrates two types of knowledge reasoning: knowledge logical relation reasoning and knowledge structure reasoning. Knowledge logical relation reasoning implements knowledge consistency checking and the reasoning of implicit associations between knowledge points. For knowledge structure reasoning, we propose a Knowledge Unit Circle Search strategy, which implements semantic implication extension, semantic relevance extension, and semantic class-membership confirmation. Moreover, the knowledge structure reasoning results are visualized using the ITM Toolkit. A prototype system of visual knowledge structure reasoning has been implemented and applied to massive knowledge organization, management, and services for education.

  • A Hybrid Speech Emotion Recognition System Based on Spectral and Prosodic Features

    Yu ZHOU  Junfeng LI  Yanqing SUN  Jianping ZHANG  Yonghong YAN  Masato AKAGI  

     
    PAPER-Human-computer Interaction

      Page(s):
    2813-2821

    In this paper, we present a hybrid speech emotion recognition system exploiting both spectral and prosodic features in speech. For capturing the emotional information in the spectral domain, we propose a new spectral feature extraction method that applies a novel non-uniform subband processing, instead of the mel-frequency subbands used in Mel-Frequency Cepstral Coefficients (MFCC). For prosodic features, a set of features closely correlated with speech emotional states is selected. In the proposed hybrid emotion recognition system, owing to the inherently different characteristics of these two kinds of features (e.g., data size), the newly extracted spectral features are modeled by a Gaussian Mixture Model (GMM) and the selected prosodic features are modeled by a Support Vector Machine (SVM). The final result of the proposed emotion recognition system is obtained by combining the results from these two subsystems. Experimental results show that (1) the proposed non-uniform spectral features are more effective than the traditional MFCC features for emotion recognition; and (2) the proposed hybrid emotion recognition system using both spectral and prosodic features yields a relative recognition error reduction of 17.0% over traditional recognition systems using only the spectral features, and 62.3% over those using only the prosodic features.

  • Phase Portrait Analysis for Multiresolution Generalized Gradient Vector Flow

    Sirikan CHUCHERD  Annupan RODTOOK  Stanislav S. MAKHANOV  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2822-2835

    We propose a modification of the generalized gradient vector flow field techniques based on multiresolution analysis and phase portrait techniques. The original image is subjected to multiresolution analysis to create a sequence of approximation and detail images. The approximations are converted into an edge map and subsequently into a gradient field subjected to the generalized gradient vector flow transformation. The procedure removes noise and extends large gradients. At every iteration the algorithm obtains a new, improved vector field, which is filtered using phase portrait analysis. The phase portrait is applied to a window of variable size to find possible boundary points and noise. As opposed to previous phase portrait techniques based on binary rules, our method generates a continuous, adjustable score. The score is a function of the eigenvalues of the corresponding linearized system of ordinary differential equations. The salient feature of the method is continuity: where the score is high, the point is likely to belong to a noisy part of the image; where it is low, the point is likely to lie on the boundary of the object. The score is used by a filter applied to the original image: in the neighbourhood of points with a high score the gray level is smoothed, whereas at boundary points the gray level is increased. Next, a new gradient field is generated and the result is incorporated into the iterative gradient vector flow iterations. This approach, combined with multiresolution analysis, leads to robust segmentations with an impressive improvement in accuracy. Our numerical experiments with synthetic and real medical ultrasound images show that the proposed technique outperforms the conventional gradient vector flow method even when the filters and the multiresolution analysis are applied in the same fashion. Finally, we show that the proposed algorithm allows the initial contour to be much farther from the actual boundary than is possible with the conventional methods.
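The classification role played by the eigenvalues of the linearized system can be illustrated with a small sketch. The paper's continuous score is replaced here by the classic discrete portrait types, purely for illustration:

```python
import numpy as np

def portrait_type(A):
    # classify the linearized 2x2 system x' = A x by its eigenvalues
    ev = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = ev.real, ev.imag
    if np.any(np.abs(im) > 1e-12):
        if np.all(np.abs(re) < 1e-12):
            return "center"                    # purely imaginary eigenvalues
        return "stable spiral" if np.all(re < 0) else "unstable spiral"
    if re[0] * re[1] < 0:
        return "saddle"                        # real eigenvalues of opposite sign
    return "stable node" if np.all(re < 0) else "unstable node"
```

A continuous score, as in the paper, would instead map these eigenvalue configurations to a real value rather than a discrete label.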

  • Optimization without Minimization Search: Constraint Satisfaction by Orthogonal Projection with Applications to Multiview Triangulation

    Kenichi KANATANI  Yasuyuki SUGAYA  Hirotaka NIITSUMA  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    2836-2845

    We present an alternative approach to what we call the "standard optimization", which minimizes a cost function by searching a parameter space. Instead, our approach "projects", in the joint observation space, onto the manifold defined by the "consistency constraint", which demands that any minimal subset of observations produce the same result. This approach avoids many of the difficulties encountered in the standard optimization. As typical examples, we apply it to line fitting and multiview triangulation; the latter yields a new algorithm far more efficient than existing methods. We also discuss the optimality of our approach.

  • Direct Importance Estimation with a Mixture of Probabilistic Principal Component Analyzers

    Makoto YAMADA  Masashi SUGIYAMA  Gordon WICHERN  Jaak SIMM  

     
    LETTER-Fundamentals of Information Systems

      Page(s):
    2846-2849

    Estimating the ratio of two probability density functions (a.k.a. the importance) has recently gathered a great deal of attention since importance estimators can be used for solving various machine learning and data mining problems. In this paper, we propose a new importance estimation method using a mixture of probabilistic principal component analyzers. The proposed method is more flexible than existing approaches, and is expected to work well when the target importance function is correlated and rank-deficient. Through experiments, we illustrate the validity of the proposed approach.

  • The Time Complexity of Hsu and Huang's Self-Stabilizing Maximal Matching Algorithm

    Masahiro KIMOTO  Tatsuhiro TSUCHIYA  Tohru KIKUNO  

     
    LETTER-Fundamentals of Information Systems

      Page(s):
    2850-2853

    The exact time complexity of Hsu and Huang's self-stabilizing maximal matching algorithm is provided. It is n^2 + n - 2 if the number of nodes n is even and n^2 + n - if n is odd.

  • Energy Efficient Skyline Query Processing in Wireless Sensor Networks

    Dongook SEONG  Junho PARK  Myungho YEO  Jaesoo YOO  

     
    LETTER-Data Engineering, Web Information Systems

      Page(s):
    2854-2857

    In sensor networks, many studies have proposed ways to process in-network aggregation efficiently. Unlike general aggregation queries, skyline query processing compares multi-dimensional data to produce its result, which makes skyline queries very difficult to process in sensor networks; filtering out unnecessary data is therefore important for energy-efficient skyline query processing. Existing approaches eliminate unnecessary data transmissions by deploying filters to all sensors, but network lifetime is reduced by the energy consumed in transmitting the filters. In this paper, we propose a lazy filtering-based in-network skyline query processing algorithm that reduces the energy consumed by filter transmission. Our algorithm creates a skyline filter table (SFT) during the data gathering process, which sends data from the sensor nodes to the base station, and uses it to filter out unnecessary data transmissions. The experimental results show that our algorithm reduces false positives by 53% and improves network lifetime by 44% on average over the existing method.

  • The Design of a Total Ship Service Framework Based on a Ship Area Network

    Daekeun MOON  Kwangil LEE  Hagbae KIM  

     
    LETTER-Dependable Computing

      Page(s):
    2858-2861

    The rapid growth of IT technology has enabled ship navigation and automation systems to gain better functionality and safety. However, they generally have their own proprietary structures and networks, which makes interfacing with and remote access to them difficult. In this paper, we propose a total ship service framework that includes a ship area network to integrate separate system networks with heterogeneity and dynamicity, and a ship-shore communication infrastructure to support a remote monitoring and maintenance service using satellite communications. Finally, we present some ship service systems to demonstrate the applicability of the proposed framework.

  • Calibrating Coordinates of a Tabletop Display with a Reflex in Eye-Hand Coordination

    Makio ISHIHARA  Yukio ISHIHARA  

     
    LETTER-Human-computer Interaction

      Page(s):
    2862-2865

    This manuscript introduces a pointing interface for a tabletop display with a reflex in eye-hand coordination. The reflex is a natural response to inconsistency between kinetic information of a mouse and visual feedback of the mouse cursor. The reflex yields information on which side the user sees the screen from, so that the screen coordinates are aligned with the user's position.

  • Image Contrast Enhancement by Global and Local Adjustment of Gray Levels

    Na DUAN  Soon Hak KWON  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2866-2869

    Various contrast enhancement methods such as histogram equalization (HE) and local contrast enhancement (LCE) have been developed to increase the visibility and details of a degraded image. We propose an image contrast enhancement method based on the global and local adjustment of gray levels by combining HE with LCE methods. For the optimal combination of both, we introduce a discrete entropy. Evaluation of our experimental results shows that the proposed method outperforms both the HE and LCE methods.
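The global (HE) half of such a combination can be sketched as follows. This is textbook histogram equalization, not the paper's combined method, and it assumes an 8-bit image with more than one gray level:

```python
import numpy as np

def hist_eq(img):
    # classic global histogram equalization for an 8-bit grayscale image:
    # map each gray level through the normalized cumulative histogram
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255)
    return lut.astype(np.uint8)[img]
```

A local method (LCE) would instead compute such a mapping within a sliding window; the paper's contribution is an entropy-guided combination of the two.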

  • Extraction of Combined Features from Global/Local Statistics of Visual Words Using Relevant Operations

    Tetsu MATSUKAWA  Takio KURITA  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    2870-2874

    This paper presents a combined feature extraction method to improve the performance of bag-of-features image classification. We apply 10 relevant operations to the global/local statistics of visual words. Because the number of pairwise combinations of visual words is large, we apply feature selection methods, including the Fisher discriminant criterion and L1-SVM. The effectiveness of the proposed method is confirmed through experiments.

  • A Semi-Supervised Approach to Perceived Age Prediction from Face Images

    Kazuya UEKI  Masashi SUGIYAMA  Yasuyuki IHARA  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    2875-2878

    We address the problem of perceived age estimation from face images, and propose a new semi-supervised approach involving two novel aspects. The first novelty is an efficient active learning strategy for reducing the cost of labeling face samples. Given a large number of unlabeled face samples, we reveal the cluster structure of the data and propose to label cluster-representative samples so as to cover as many clusters as possible. This simple sampling strategy allows us to boost the performance of a manifold-based semi-supervised learning method with only a relatively small number of labeled samples. The second contribution is to take the heterogeneous characteristics of human age perception into account. It is rare to misjudge the age of a 5-year-old child as 15, but the age of a 35-year-old person is often misjudged as 45; thus, the magnitude of the error differs depending on the subject's age. We carried out a large-scale questionnaire survey to quantify human age perception characteristics, and propose to utilize the quantified characteristics in a weighted regression framework. Consequently, our proposed method is expressed in the form of weighted least-squares with a manifold regularizer, which is scalable to massive datasets. Through real-world age estimation experiments, we demonstrate the usefulness of the proposed method.
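The weighted least-squares core of such an estimator can be written in closed form. The manifold regularizer is omitted here, so this is only the plain weighted ridge step, with illustrative names:

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-3):
    # closed-form weighted least squares with an L2 (ridge) penalty:
    # theta = (X^T W X + lam I)^{-1} X^T W y, with W = diag(w)
    A = X.T @ (w[:, None] * X) + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ (w * y))
```

In the paper's setting, the weights would encode the age-dependent perception characteristics, and an additional graph-Laplacian term would be added to `A` for the manifold regularizer.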

  • Improving Proximity and Diversity in Multiobjective Evolutionary Algorithms

    Chang Wook AHN  Yehoon KIM  

     
    LETTER-Biocybernetics, Neurocomputing

      Page(s):
    2879-2882

    This paper presents an approach for improving proximity and diversity in multiobjective evolutionary algorithms (MOEAs). The idea is to discover new nondominated solutions in the promising area of the search space, which is achieved by applying mutation only to the most converged and the least crowded individuals. In other words, proximity and diversity can be improved because new nondominated solutions are found in the vicinity of highly converged, less crowded individuals. Empirical results on multiobjective knapsack problems (MKPs) demonstrate that the proposed approach discovers a set of nondominated solutions much closer to the global Pareto front while maintaining a better distribution of the solutions.

  • Design of Sigmoid Activation Functions for Fuzzy Cognitive Maps via Lyapunov Stability Analysis

    In Keun LEE  Soon Hak KWON  

     
    LETTER-Biocybernetics, Neurocomputing

      Page(s):
    2883-2886

    Fuzzy cognitive maps (FCMs) are used to support decision making, and the decision processes are performed by inference on FCMs. The inference depends greatly on activation functions such as the sigmoid, hyperbolic tangent, step, and threshold linear functions. However, the sigmoid functions widely used in decision-making processes have been designed manually by experts. We therefore propose a method for designing sigmoid functions through Lyapunov stability analysis, and show the usefulness of the proposed method through experimental results on FCM inference using the designed sigmoid functions.
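FCM inference with a sigmoid activation can be sketched as follows. The two-concept map below is illustrative; when the update is a contraction (small weights and sigmoid slope), the iteration converges to a unique fixed point regardless of the initial state, which is the kind of behavior Lyapunov analysis certifies:

```python
import numpy as np

def sigmoid(x, lam=1.0):
    # lam controls the slope of the sigmoid activation
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_infer(W, a0, lam=1.0, steps=100):
    # FCM inference: repeatedly apply a <- sigmoid(W a)
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = sigmoid(W @ a, lam)
    return a
```

Choosing the slope `lam` so that such a fixed point exists and is stable is, roughly, the design problem the paper addresses via Lyapunov analysis.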