
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4


Volume E91-D No.2 (Publication Date: 2008/02/01)

    Special Section on Foundations of Computer Science
  • FOREWORD Open Access

    Chuzo IWAMOTO  

     
    FOREWORD

      Page(s):
    161-161
  • Inferring Pedigree Graphs from Genetic Distances

    Takeyuki TAMURA  Hiro ITO  

     
    PAPER-Graph Algorithms

      Page(s):
    162-169

    In this paper, we study the problem of inferring blood relationships that satisfy a given matrix of genetic distances between all pairs of n nodes. Blood relationships are represented by our proposed graph class, called a pedigree graph. A pedigree graph is a directed acyclic graph in which the maximum indegree is at most two. We show that the number of pedigree graphs which satisfy the condition of given genetic distances may be exponential, but that they can be represented by one directed acyclic graph with n nodes. Moreover, an O(n^3) time algorithm which solves the problem is also given. Although phylogenetic trees and phylogenetic networks are data structures similar to pedigree graphs, it seems that inference methods for phylogenetic trees and networks cannot be applied to infer pedigree graphs, since nodes of phylogenetic trees and networks represent species whereas nodes of pedigree graphs represent individuals. We also show an O(n^2) time algorithm which detects a contradiction between a given pedigree graph and a given matrix of genetic distances.
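
    Below is a minimal sketch (not the paper's inference algorithm) that simply checks the structural definition of a pedigree graph stated above: a directed acyclic graph whose maximum indegree is at most two. The node and edge names are made up for illustration.

    from collections import defaultdict, deque

    def is_pedigree_graph(nodes, edges):
        """edges: iterable of (parent, child) pairs."""
        nodes = list(nodes)
        indeg = defaultdict(int)
        succ = defaultdict(list)
        for u, v in edges:
            indeg[v] += 1
            succ[u].append(v)
        if any(indeg[v] > 2 for v in nodes):       # maximum indegree at most two
            return False
        # Kahn's algorithm: the graph is acyclic iff every node can be removed.
        deg = {v: indeg[v] for v in nodes}
        queue = deque(v for v in nodes if deg[v] == 0)
        removed = 0
        while queue:
            u = queue.popleft()
            removed += 1
            for w in succ[u]:
                deg[w] -= 1
                if deg[w] == 0:
                    queue.append(w)
        return removed == len(nodes)

    # Example: child C with parents A and B.
    print(is_pedigree_graph(["A", "B", "C"], [("A", "C"), ("B", "C")]))   # True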

  • Longest Path Problems on Ptolemaic Graphs

    Yoshihiro TAKAHARA  Sachio TERAMOTO  Ryuhei UEHARA  

     
    PAPER-Graph Algorithms

      Page(s):
    170-177

    The longest path problem is the problem of finding a longest path in a given graph. While the graph classes in which the Hamiltonian path problem can be solved efficiently have been widely investigated, few graph classes are known for which the longest path problem can be solved efficiently. Polynomial-time algorithms for finding a longest cycle and a longest path in a Ptolemaic graph are proposed. Ptolemaic graphs are the graphs that satisfy the Ptolemy inequality, and they are exactly the intersection of chordal graphs and distance-hereditary graphs. The algorithms use the dynamic programming technique on a laminar structure of cliques, which is a recent characterization of Ptolemaic graphs.
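
    As a small, hedged illustration of the graph class (not of the longest-path algorithm itself), the following sketch tests whether a connected graph satisfies the Ptolemy inequality d(u,v)*d(w,x) <= d(u,w)*d(v,x) + d(u,x)*d(v,w) for every four vertices, using BFS shortest-path distances; the example graph is made up.

    from collections import deque
    from itertools import combinations

    def bfs_dist(adj, src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def is_ptolemaic(adj):
        d = {v: bfs_dist(adj, v) for v in adj}
        for u, v, w, x in combinations(adj, 4):
            # The three possible pairings of the four vertices into two pairs.
            a = d[u][v] * d[w][x]
            b = d[u][w] * d[v][x]
            c = d[u][x] * d[v][w]
            if a > b + c or b > a + c or c > a + b:
                return False
        return True

    # A chordless 4-cycle violates the inequality, so it is not Ptolemaic.
    c4 = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
    print(is_ptolemaic(c4))   # False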

  • Inequalities on the Number of Connected Spanning Subgraphs in a Multigraph

    Peng CHENG  Shigeru MASUYAMA  

     
    PAPER-Graphs and Networks

      Page(s):
    178-186

    Consider an undirected multigraph G=(V,E) with n vertices and m edges, and let N_i denote the number of connected spanning subgraphs with i (n-1 ≤ i ≤ m) edges in G. Recently, we showed in [3] the validity of (m-i+1)N_{i-1} > (i-n+2)N_i for a simple graph and each i (n ≤ i ≤ m). Note that, from this inequality, N_{i-1}N_{i+1} ≤ N_i^2 is easily derived. In this paper, for a multigraph G and all i (n ≤ i ≤ m), we prove (m-i+1)N_{i-1} ≥ (i-n+2)N_i, and give a necessary and sufficient condition under which (m-i+1)N_{i-1} = (i-n+2)N_i. In particular, this means that (m-i+1)N_{i-1} > (i-n+2)N_i is not valid for all multigraphs in general. Furthermore, we prove N_{i-1}N_{i+1} ≤ N_i^2, which is not straightforwardly derived from (m-i+1)N_{i-1} ≥ (i-n+2)N_i, and also introduce a necessary and sufficient condition under which N_{i-1}N_{i+1} = N_i^2. Moreover, we show a sufficient condition for a multigraph to have N_n^2 > N_{n-1}N_{n+1}. As special cases of the sufficient condition, we show that if G contains sufficiently many multiple edges between some pair of vertices, or if its underlying simple graph has no cycle with length more than 4, then N_n^2 > N_{n-1}N_{n+1}.
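
    The quantities above can be checked by brute force on a tiny multigraph; the following sketch (illustration only, not the paper's proof technique) counts N_i directly and prints both sides of (m-i+1)N_{i-1} ≥ (i-n+2)N_i. The example multigraph, two vertices joined by three parallel edges, is made up; for it the two sides coincide, which illustrates why the strict inequality can fail for multigraphs.

    from itertools import combinations

    def connected(n_vertices, edge_subset):
        parent = list(range(n_vertices))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edge_subset:
            parent[find(u)] = find(v)
        return len({find(v) for v in range(n_vertices)}) == 1

    def spanning_counts(n_vertices, edges):
        m = len(edges)
        counts = [0] * (m + 1)
        for i in range(n_vertices - 1, m + 1):
            for subset in combinations(range(m), i):
                if connected(n_vertices, [edges[k] for k in subset]):
                    counts[i] += 1
        return counts

    edges = [(0, 1), (0, 1), (0, 1)]          # multigraph: 3 parallel edges
    n, m = 2, len(edges)
    N = spanning_counts(n, edges)             # N[1] = 3, N[2] = 3, N[3] = 1
    for i in range(n, m + 1):
        print(i, (m - i + 1) * N[i - 1], (i - n + 2) * N[i])   # both sides are equal here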

  • Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    Ryoso HAMANE  Toshiya ITOH  

     
    PAPER-Approximation Algorithms

      Page(s):
    187-199

    When a store sells items to customers, the store wishes to set the prices of the items so as to maximize its profit. If the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which can leave the store with less profit in either case. It is therefore hard for the store to decide the prices of the items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item so as to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify item pricing problems according to how many copies of each item the store can sell and how the customers value the items. If the store can sell every item in unlimited (resp. limited) amounts, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j ∈ C wishes to buy a set e_j ⊆ V of items and assigns it a valuation w(e_j) ≥ 0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. For the single-minded item pricing problems (in unlimited supply), we then show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.
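
    A minimal sketch of the unlimited-supply, single-minded setting described above (an illustration of the model, not of the paper's approximation algorithms): customer j buys the entire bundle e_j exactly when its total price does not exceed the valuation w(e_j), and the store collects that total. The prices, bundles and valuations below are made up.

    def profit(prices, customers):
        """prices: {item: price}; customers: list of (bundle_set, valuation)."""
        total = 0.0
        for bundle, valuation in customers:
            cost = sum(prices[item] for item in bundle)
            if cost <= valuation:              # the customer buys the bundle
                total += cost
        return total

    prices = {"a": 3.0, "b": 2.0, "c": 4.0}
    customers = [({"a", "b"}, 6.0),            # pays 5
                 ({"b", "c"}, 5.0),            # 6 > 5, does not buy
                 ({"c"}, 4.0)]                 # pays 4
    print(profit(prices, customers))           # 9.0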

  • Improvements of HITS Algorithms for Spam Links

    Yasuhito ASANO  Yu TEZUKA  Takao NISHIZEKI  

     
    PAPER-Scoring Algorithms

      Page(s):
    200-208

    The HITS algorithm proposed by Kleinberg is one of the representative methods of scoring Web pages by using hyperlinks. At the time the algorithm was proposed, most of the pages given a high score by the algorithm were indeed related to a given topic, and hence the algorithm could be used to find related pages. However, the algorithm and its variants, including Bharat's improved HITS (abbreviated to BHITS) proposed by Bharat and Henzinger, can no longer be used to find related pages on today's Web, due to the increase in spam links. In this paper, we first propose three methods to find "linkfarms," that is, sets of spam links forming a densely connected subgraph of a Web graph. We then present an algorithm, called a trust-score algorithm, to give high scores to pages that, with high probability, are not spam pages. Combining the three methods and the trust-score algorithm with BHITS, we obtain several variants of the HITS algorithm. We ascertain by experiments that one of them, named TaN+BHITS, which uses the trust-score algorithm and the method of finding linkfarms by employing name servers, is most suitable for finding related pages on today's Web. Our algorithms require no more time and memory than the original HITS algorithm, and can be executed on a PC with a small amount of main memory.
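
    For reference, a minimal sketch of Kleinberg's original HITS iteration that the paper builds on (the proposed linkfarm detection and trust-score variants are not reproduced here); the tiny link graph is made up.

    import math

    def hits(links, iterations=50):
        """links: {page: [pages it points to]}."""
        pages = set(links) | {q for targets in links.values() for q in targets}
        hub = {p: 1.0 for p in pages}
        auth = {p: 1.0 for p in pages}
        for _ in range(iterations):
            # Authority score: sum of hub scores of pages linking to the page.
            auth = {p: sum(hub[q] for q in links if p in links[q]) for p in pages}
            # Hub score: sum of authority scores of pages the page links to.
            hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
            for scores in (auth, hub):         # L2-normalize to keep scores bounded
                norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
                for p in scores:
                    scores[p] /= norm
        return hub, auth

    links = {"p1": ["p3"], "p2": ["p3"], "p3": ["p1"]}
    hub, auth = hits(links)
    print(max(auth, key=auth.get))             # "p3" receives the top authority score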

  • On the Generative Power of Multiple Context-Free Grammars and Macro Grammars

    Hiroyuki SEKI  Yuki KATO  

     
    PAPER-Formal Language Theory

      Page(s):
    209-221

    Several grammars whose generative power lies between that of context-free grammars and context-sensitive grammars have been proposed. Among them are macro grammars and tree adjoining grammars. Multiple context-free grammar is also a natural extension of context-free grammars, and is known to be stronger in generative power than tree adjoining grammar while remaining recognizable in polynomial time. In this paper, the generative power of several subclasses of variable-linear macro grammars and that of multiple context-free grammars are compared in detail.

  • Learning of Finite Unions of Tree Patterns with Internal Structured Variables from Queries

    Satoshi MATSUMOTO  Takayoshi SHOUDAI  Tomoyuki UCHIDA  Tetsuhiro MIYAHARA  Yusuke SUZUKI  

     
    PAPER-Algorithmic Learning Theory

      Page(s):
    222-230

    A linear term tree is defined as an edge-labeled rooted tree pattern with ordered children and internal structured variables whose labels are mutually distinct. A variable can be replaced with an arbitrary edge-labeled rooted ordered tree. We consider the polynomial-time learnability of finite unions of linear term trees in the exact learning model formalized by Angluin. The language L(t) of a linear term tree t is the set of all trees obtained from t by substituting arbitrary edge-labeled rooted ordered trees for all variables in t. Moreover, for a finite set S of linear term trees, we define L(S) = ∪_{t∈S} L(t). A target of learning, denoted by T*, is a finite set of linear term trees, where the number of edge labels is infinite. In this paper, for any set T* of m linear term trees (m ≥ 0), we present a query learning algorithm which exactly identifies T* in polynomial time using at most 2mn^2 Restricted Subset queries and at most m+1 Equivalence queries, where n is the maximum size of counterexamples. Finally, we note that finite sets of linear term trees are not learnable in polynomial time using Restricted Equivalence, Membership and Subset queries.

  • Survey Propagation as "Probabilistic Token Passing"

    Ronghui TU  Yongyi MAO  Jiying ZHAO  

     
    LETTER-Algorithm Theory

      Page(s):
    231-233

    In this paper, we present a clean and simple formulation of survey propagation (SP) for constraint-satisfaction problems as "probabilistic token passing". The result shows the importance of extending variable alphabets to their power sets in designing SP algorithms.

  • Regular Section
  • Detection of Displacement Vectors through Edge Segment Detection

    Haiyang YU  Seizaburo NIITSUMA  

     
    PAPER-Computation and Computational Models

      Page(s):
    234-242

    Research on displacement vector detection has gained increasing attention in recent years. However, no relationship between displacement vectors and the outlines of objects in motion has been established. We describe a new method of detecting displacement vectors through edge segment detection, emphasizing the correlation between displacement vectors and object outlines. Specifically, after an edge segment is detected, the direction of motion of the edge segment can be inferred from the variation in the values of the Laplacian-of-Gaussian filter at positions near the edge segment before and after the motion. Then, by observing the degree of displacement before and after the motion, the displacement vector can be calculated. A comparison of accuracy with other methods of displacement vector detection demonstrates the feasibility of the proposed method.
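
    A hedged one-dimensional illustration of the mechanism described above (an assumption about how the cue behaves, not the paper's detector): the Laplacian-of-Gaussian response sampled at a fixed position near a step edge changes sign as the edge moves past that position, so comparing the responses before and after the motion indicates the direction of displacement. The signal, positions and sigma are made up.

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def step_edge(length, edge_pos):
        signal = np.zeros(length)
        signal[edge_pos:] = 1.0                # rising step edge at edge_pos
        return signal

    pos = 50                                   # fixed observation point near the edge
    before = gaussian_laplace(step_edge(100, 48), sigma=2.0)   # edge to the left of pos
    after = gaussian_laplace(step_edge(100, 52), sigma=2.0)    # edge moved to the right of pos

    print("LoG at x=50 before/after:", round(before[pos], 4), round(after[pos], 4))
    print("inferred direction:", "right" if after[pos] - before[pos] > 0 else "left")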

  • The Optimization of In-Memory Space Partitioning Trees for Cache Utilization

    Myung Ho YEO  Young Soo MIN  Kyoung Soo BOK  Jae Soo YOO  

     
    PAPER-Database

      Page(s):
    243-250

    In this paper, a novel cache-conscious indexing technique based on space partitioning trees is proposed. Recently, many researchers have investigated efficient cache-conscious indexing techniques that improve the retrieval performance of in-memory database management systems. However, most studies considered data partitioning and targeted fast information retrieval. Existing data-partitioning-based index structures significantly degrade performance due to redundant accesses of overlapped spaces. In particular, R-tree-based index structures suffer from the propagation of MBR (Minimum Bounding Rectangle) information caused by frequent data updates. In this paper, we propose an in-memory space partitioning index structure for optimal cache utilization. The proposed index structure is compared with existing index structures in terms of update performance, insertion performance and cache-utilization rate in a variety of environments. The results demonstrate that the proposed index structure offers better performance than existing index structures.

  • Accelerating Web Content Filtering by the Early Decision Algorithm

    Po-Ching LIN  Ming-Dao LIU  Ying-Dar LIN  Yuan-Cheng LAI  

     
    PAPER-Contents Technology and Web Information Systems

      Page(s):
    251-257

    Real-time content analysis is typically a bottleneck in Web filtering. To accelerate the filtering process, this work presents a simple but effective early decision algorithm that analyzes only part of the Web content. This algorithm can make the filtering decision, either to block or to pass the Web content, as soon as it is confident, with high probability, that the content really belongs to a banned or an allowed category. Experiments show the algorithm needs to examine only around one-fourth of the Web content on average, while the accuracy remains fairly good: 89% for the banned content and 93% for the allowed content. This algorithm can complement other Web filtering approaches, such as URL blocking, to filter the Web content with high accuracy and efficiency. Text classification algorithms in other applications can also follow the principle of early decision to accelerate their applications.
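
    A minimal sketch of the early decision principle described above (the scoring function and thresholds are made-up illustrations, not the paper's classifier): the content is scanned block by block, a running score is updated, and the decision is made as soon as either threshold is crossed.

    def early_decision(blocks, weights, block_threshold=5.0, pass_threshold=-5.0):
        score = 0.0
        for i, block in enumerate(blocks, 1):
            for word in block.split():
                score += weights.get(word.lower(), 0.0)
            if score >= block_threshold:
                return "block", i              # confident enough to block early
            if score <= pass_threshold:
                return "pass", i               # confident enough to pass early
        return ("block" if score > 0 else "pass"), len(blocks)

    weights = {"casino": 2.5, "poker": 2.0, "recipe": -1.5, "garden": -1.5}
    page = ["online casino bonus", "poker casino jackpot", "more text here"]
    print(early_decision(page, weights))       # ('block', 2) -- decided after 2 of 3 blocks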

  • Distributed Multiple Access Control for the Wireless Mesh Personal Area Networks

    Moo Sung PARK  Byungjoo LEE  Seung Hyong RHEE  

     
    PAPER-Networks

      Page(s):
    258-263

    Mesh networking technologies for both high-rate and low-rate wireless personal area networks (WPANs) are under development by several standardization bodies. They are considering adopting distributed TDMA MAC protocols to provide seamless user mobility as well as good peer-to-peer QoS in WPAN mesh networks. It has been pointed out, however, that the absence of a central controller in a wireless TDMA MAC may cause severe performance degradation: e.g., fair allocation, service differentiation, and admission control may be hard to achieve or cannot be provided. In this paper, we suggest a new framework of resource allocation for distributed MAC protocols in WPANs. Simulation results show that our algorithm achieves both fair resource allocation and flexible service differentiation in a fully distributed way for mesh WPANs in which the devices have high mobility and various requirements. We also provide an analytical model to discuss its unique equilibrium and to compute the lengths of the reserved time slots at the stable point.

  • Filtering False Positives Based on Server-Side Behaviors

    Makoto SHIMAMURA  Miyuki HANAOKA  Kenji KONO  

     
    PAPER-Application Information Security

      Page(s):
    264-276

    Reducing the rate of false positives is of vital importance in enhancing the usefulness of signature-based network intrusion detection systems (NIDSs). To reduce the number of false positives, a network administrator must thoroughly investigate a lengthy list of signatures and carefully disable the ones that detect attacks that are not harmful to the administrator's environment. This is a daunting task; if some signatures are disabled by mistake, the NIDS fails to detect critical remote attacks. We designed a NIDS, TrueAlarm, to reduce the rate of false positives. Conventional NIDSs alert administrators that a malicious message has been detected, regardless of whether the message actually attempts to compromise the protected server. In contrast, TrueAlarm delays the alert until it has confirmed that an attempt has been made. The TrueAlarm NIDS cooperates with a server-side monitor that observes the protected server's behavior. TrueAlarm only alerts administrators when a server-side monitor has detected deviant server behavior that must have been caused by a message detected by a NIDS. Our experimental results revealed that TrueAlarm reduces the rate of false positives. Using actual network traffic collected over 14 days, TrueAlarm produced 46 false positives, while Snort, a conventional NIDS, produced 818.

  • Fuzzy Rule Extraction from Dynamic Data for Voltage Risk Identification

    Chen-Sung CHANG  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    277-285

    This paper presents a methodology for performing on-line voltage risk identification (VRI) in power supply networks using hyperrectangular composite neural networks (HRCNNs) and synchronized phasor measurements. The FHRCNN presented in this study integrates the paradigm of neural networks with the concept of knowledge-based approaches, rendering both more useful than when applied alone. The fuzzy rules extracted from the dynamic data relating to the power system formalize the knowledge applied by experts when conducting the voltage risk assessment procedure. The efficiency of the proposed technique is demonstrated via its application to the Taiwan Power Provider System (Tai-Power System) under various operating conditions. Overall, the results indicate that the proposed scheme achieves a success rate of at least 97% in determining the current voltage security level.

  • CombNET-III with Nonlinear Gating Network and Its Application in Large-Scale Classification Problems

    Mauricio KUGLER  Susumu KUROYANAGI  Anto Satriyo NUGROHO  Akira IWATA  

     
    PAPER-Pattern Recognition

      Page(s):
    286-295

    Modern applications of pattern recognition generate very large amounts of data, which require large computational effort to process. However, the majority of methods intended for large-scale problems merely adapt standard classification methods without considering whether those algorithms are appropriate for large-scale problems. CombNET-II was one of the first methods specifically proposed for this kind of task. Recently, an extension of this model, named CombNET-III, was proposed. The main modifications over the previous model were the substitution of the expert networks by Support Vector Machines (SVM) and the development of a general probabilistic framework. Although the previous model's performance and flexibility were improved, the low accuracy of the gating network was still compromising CombNET-III's classification results. In addition, due to the use of SVM-based experts, the computational complexity is higher than that of CombNET-II. This paper proposes a new two-layered gating network structure that reduces the compromise between the number of clusters and accuracy, increasing the model's performance with only a small increase in complexity. This high-accuracy gating network also enables the removal of low-confidence expert networks from the decoding procedure. This, together with a new, faster strategy for calculating multiclass SVM outputs, significantly reduces the computational complexity. Experimental results on problems with a large number of categories show that the proposed model outperforms the original CombNET-III, while presenting a computational complexity more than one order of magnitude smaller. Moreover, when applied to a database with a large number of samples, it outperformed all compared methods, confirming the proposed model's flexibility.

  • Cepstral Statistics Compensation and Normalization Using Online Pseudo Stereo Codebooks for Robust Speech Recognition in Additive Noise Environments

    Jeih-weih HUNG  

     
    PAPER-Speech and Hearing

      Page(s):
    296-311

    This paper proposes several cepstral statistics compensation and normalization algorithms which alleviate the effect of additive noise on cepstral features for speech recognition. The algorithms are simple yet efficient noise reduction techniques that use online-constructed pseudo-stereo codebooks to evaluate the statistics in both clean and noisy environments. The process yields transformations for both clean speech cepstra and noise-corrupted speech cepstra, or for noise-corrupted speech cepstra only, so that the statistics of the transformed speech cepstra are similar for both environments. Experimental results show that these codebook-based algorithms provide significant performance gains compared to results obtained with conventional utterance-based normalization approaches. The proposed codebook-based cepstral mean and variance normalization (C-CMVN), linear least squares (LLS) and quadratic least squares (QLS) methods outperform utterance-based CMVN (U-CMVN) by 26.03%, 22.72% and 27.48%, respectively, in relative word error rate reduction for experiments conducted on Test Set A of the Aurora-2 digit database.
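
    For reference, a minimal sketch of utterance-based cepstral mean and variance normalization (U-CMVN), the baseline the proposed codebook-based methods are compared against; the codebook-based transformations themselves are not reproduced here, and the feature array is random illustration data.

    import numpy as np

    def cmvn(cepstra):
        """cepstra: (num_frames, num_coeffs) array of cepstral features."""
        mean = cepstra.mean(axis=0)
        std = cepstra.std(axis=0)
        std[std == 0.0] = 1.0                  # guard against constant dimensions
        return (cepstra - mean) / std          # zero mean, unit variance per dimension

    features = np.random.randn(200, 13) * 3.0 + 1.0   # fake 13-dimensional cepstra
    normalized = cmvn(features)
    print(normalized.mean(axis=0).round(6))
    print(normalized.std(axis=0).round(6))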

  • Interactive Learning of Spoken Words and Their Meanings Through an Audio-Visual Interface

    Naoto IWAHASHI  

     
    PAPER-Speech and Hearing

      Page(s):
    312-321

    This paper presents a new interactive learning method for spoken word acquisition through human-machine audio-visual interfaces. During the course of learning, the machine makes a decision about whether an orally input word is a word in the lexicon the machine has learned, using both speech and visual cues. Learning is carried out on-line, incrementally, based on a combination of active and unsupervised learning principles. If the machine judges with a high degree of confidence that its decision is correct, it learns the statistical models of the word and a corresponding image category as its meaning in an unsupervised way. Otherwise, it asks the user a question in an active way. The function used to estimate the degree of confidence is also learned adaptively on-line. Experimental results show that the combination of active and unsupervised learning principles enables the machine and the user to adapt to each other, which makes the learning process more efficient.

  • Image Restoration for Quantifying TFT-LCD Defect Levels

    Kyu Nam CHOI  No Kap PARK  Suk In YOO  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    322-329

    Although machine vision systems for automatically detecting visual defects, called mura, have been developed for thin-film transistor liquid crystal display (TFT-LCD) panels, they have not yet reached a level of reliability that can replace human inspectors. To establish an objective criterion for identifying real defects, index functions for quantifying defect levels based on human perception have recently been researched. However, while these functions have been verified in the laboratory, further consideration is needed in order to apply them to real systems in the field. To begin with, we should correct the distortion that occurs when panels are captured. Distortion can cause the defect level in the observed image to differ from that in the panel. There are several known methods to restore the observed image in general vision systems. However, TFT-LCD panel images have a unique background degradation, composed of background non-uniformity and a vignetting effect, which cannot easily be restored by traditional methods. Therefore, in this paper we present a new method to correct background degradation of TFT-LCD panel images using principal component analysis (PCA). Experimental results show that our method properly restores the given observed images and that the transformed shape of muras closely approaches the original undistorted shape.
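
    A hedged sketch of how PCA can be used for this kind of background correction (an assumption about the setup; the paper's exact formulation is not reproduced): the dominant background modes are learned from defect-free panel images, a new image is projected onto them, and the reconstructed background is subtracted. The image sizes and data below are made up.

    import numpy as np

    def fit_background_pca(clean_images, k=5):
        X = np.stack([img.ravel() for img in clean_images])    # (num_images, num_pixels)
        mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt[:k]                    # mean image and top-k background modes

    def remove_background(image, mean, components):
        x = image.ravel() - mean
        background = mean + components.T @ (components @ x)    # PCA reconstruction
        return (image.ravel() - background).reshape(image.shape)

    clean = [np.random.rand(64, 64) * 0.1 + 0.5 for _ in range(20)]   # fake clean panels
    mean, components = fit_background_pca(clean)
    observed = clean[0] + 0.2                  # fake observed panel image
    residual = remove_background(observed, mean, components)
    print(residual.shape, float(abs(residual).mean()))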

  • Ubiquitous Home: Retrieval of Experiences in a Home Environment

    Gamhewage C. DE SILVA  Toshihiko YAMASAKI  Kiyoharu AIZAWA  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    330-340

    Automated capture and retrieval of experiences at home is interesting due to the wide variety and personal significance of such experiences. We present a system for retrieval and summarization of continuously captured multimedia data from Ubiquitous Home, a two-room house equipped with a large number of cameras and microphones. Data from pressure-based sensors on the floor are analyzed to segment the footsteps of different persons. Video and audio handover are implemented to retrieve continuous video streams corresponding to moving persons. An adaptive algorithm based on the rate of footsteps summarizes these video streams. A novel method for audio segmentation using multiple microphones is used to retrieve video based on sounds with high accuracy. An experiment in which a family lived in this house for twelve days was conducted. The system was evaluated by the residents, who used it to retrieve their own experiences; we report and discuss the results.

  • Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Fan CHEN  Kazunori KOTANI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    341-350

    The permutation ambiguity of classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We include a selective prior for the de-mixing coefficients in the classical ICA model to alleviate this problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulate the learning rule for sICA under a Maximum a Posteriori (MAP) scheme and further derive a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition rate and robustness, even when only a few independent components are used.

  • Endoscopic Feature Tracking and Scale-Invariant Estimation of Soft-Tissue Structures

    Chia-Hsiang WU  Yung-Nien SUN  Yi-Chiao CHEN  Chien-Chen CHANG  

     
    PAPER-Biological Engineering

      Page(s):
    351-360

    In this study, we introduce a software pipeline to track feature points across endoscopic video frames. It deals with the common problems of low contrast and uneven illumination that afflict endoscopic imaging. In particular, irregular feature trajectories are eliminated to improve quality. The structure of soft tissue is determined by an iterative factorization method based on the collection of tracked features. A shape-updating mechanism is proposed in order to yield scale-invariant structures. Experimental results show that the tracking method produced good tracking performance and increased the number of tracked feature trajectories. The real scale and structure of the target scene were successfully estimated, and the recovered structure is more accurate than that obtained by the conventional method.

  • Area-Time Efficient Modulo 2^n-1 Adder Design Using Hybrid Carry Selection

    Su-Hon LIN  Ming-Hwa SHEU  

     
    LETTER-Computer Components

      Page(s):
    361-362

    A new Hybrid-Carry-Selection (HCS) approach for deriving an efficient modulo 2^n-1 addition is presented in this study. The resulting adder architecture is simple and applicable for all values of n. Based on 180-nm CMOS technology, the HCS-based modulo 2^n-1 adder demonstrates its superiority in Area-Time (AT) performance over existing solutions.
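
    A minimal sketch of the arithmetic itself, modulo 2^n-1 addition with an end-around carry (the HCS hardware architecture is not reproduced here); the operand values are made up.

    def add_mod_2n_minus_1(a, b, n):
        mask = (1 << n) - 1                    # the modulus 2^n - 1
        s = a + b
        s = (s & mask) + (s >> n)              # fold the carry-out back in (end-around carry)
        return 0 if s == mask else s           # 2^n - 1 is a second representation of zero

    n = 8
    print(add_mod_2n_minus_1(200, 100, n))     # 45
    print((200 + 100) % ((1 << n) - 1))        # reference result: 45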

  • New Inter-Cluster Proximity Index for Fuzzy c-Means Clustering

    Fan LI  Shijin DAI  Qihe LIU  Guowei YANG  

     
    LETTER-Data Mining

      Page(s):
    363-366

    This letter presents a new inter-cluster proximity index for fuzzy partitions obtained from the fuzzy c-means algorithm. It is defined as the average proximity of all possible pairs of clusters. The proximity of each pair of clusters is determined by the overlap and the separation of the two clusters. The former is quantified using concepts from fuzzy rough set theory, and the latter by computing the distance between cluster centroids. Experimental results indicate the efficiency of the proposed index.

  • Pathological Voice Detection Using Efficient Combination of Heterogeneous Features

    Ji-Yeoun LEE  Sangbae JEONG  Minsoo HAHN  

     
    LETTER-Speech and Hearing

      Page(s):
    367-370

    A combination of mutually complementary features is necessary to cope with various changes in pattern classification between normal and pathological voices. This paper proposes a method to improve pathological/normal voice classification performance by combining heterogeneous features. Different combinations of auditory-based and higher-order features are investigated. Their performances are measured by Gaussian mixture models (GMMs), linear discriminant analysis (LDA), and a classification and regression tree (CART) method. The proposed classification method using CART analysis is shown to be effective for pathological voice detection, with a classification rate of 92.7%. This is a noticeable improvement, corresponding to a 54.32% error reduction compared to the MFCC-based GMM algorithm.

  • Effects of the Temporal Fine Structure in Different Frequency Bands on Mandarin Tone Perception

    Lin YANG  Jianping ZHANG  Jian SHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing

      Page(s):
    371-374

    This letter evaluates the relative contributions of temporal fine structure cues in various frequency bands to Mandarin tone perception using novel "auditory chimaeras". Our results confirm the importance of temporal fine structure cues to lexical tone perception, and a dominant region for lexical tone perception is identified: the second to fifth harmonics contribute no less than the fundamental frequency itself.

  • Color Constancy Based on Image Similarity

    Bing LI  De XU  Jin-Hua WANG  Rui LU  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    375-378

    Computational color constancy is a classical problem in computer vision. It is an under-constrained problem, which can be solved only with additional constraints. Existing algorithms can be divided into two groups: physics-based algorithms and statistics-based approaches. In this paper, we propose a new hypothesis: images generated under the same illumination share similar features. Based on this hypothesis, a novel statistics-based color constancy algorithm is given, and a new similarity function between images is defined. The experimental results show that our algorithm is effective; more importantly, the dimension of the features used in our algorithm is much lower than in many previous statistics-based algorithms.

  • Design of Time-Varying Reverberators for Low Memory Applications

    Tacksung CHOI  Young-Cheol PARK  Dae-Hee YOUN  

     
    LETTER-Music Information Processing

      Page(s):
    379-382

    The development of artificial reverberators with low memory requirements is an important issue in applications such as mobile multimedia devices. One possibility is to use an All-Pass Filter (APF) embedded in the feedback loop of the comb filter network. In this paper, we propose a reverberator employing time-varying APFs to increase the reverberation performance. By changing the gain of the APF, we can perceptually increase the number of frequency peaks. Thus, the resulting reverberation sounds much more natural, even with less memory, than the conventional approach. In this paper, we perform theoretical and perceptual analyses of artificial reverberators employing time-varying APFs. Through the analyses, we derive the degree of phase variation of the APF that is perceptually acceptable. Based on the analyses, we propose a method of designing artificial reverberators with time-varying APFs. Subjective tests show that the proposed method provides sound quality perceptually comparable to conventional methods even though it uses less memory.
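
    A hedged sketch of the structure described above, a comb filter whose feedback path passes through an all-pass filter (APF) with a slowly time-varying coefficient; the delay lengths, gains and modulation values are made-up illustrations, not the paper's design.

    import math

    def comb_with_time_varying_apf(x, fs, comb_delay=1553, apf_delay=223,
                                   feedback=0.75, a0=0.5, depth=0.05, rate_hz=0.3):
        comb_buf = [0.0] * comb_delay          # comb filter delay line
        apf_buf = [0.0] * apf_delay            # all-pass internal delay line
        ci = ai = 0
        y = []
        for n, xn in enumerate(x):
            a = a0 + depth * math.sin(2.0 * math.pi * rate_hz * n / fs)   # varying APF gain
            out = comb_buf[ci]                 # comb output (delayed sample)
            # One all-pass step on the fed-back signal: H(z) = (z^-M - a) / (1 - a z^-M)
            w = out + a * apf_buf[ai]
            fb = apf_buf[ai] - a * w
            apf_buf[ai] = w
            ai = (ai + 1) % apf_delay
            comb_buf[ci] = xn + feedback * fb  # feedback through the time-varying APF
            ci = (ci + 1) % comb_delay
            y.append(out)
        return y

    fs = 16000
    impulse = [1.0] + [0.0] * (fs - 1)
    tail = comb_with_time_varying_apf(impulse, fs)
    print(max(abs(v) for v in tail[8000:]))    # the late tail still carries energy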