IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4

Volume E85-D No.9  (Publication Date:2002/09/01)

    Regular Section
  • Interval Arithmetic Operations in Residue Number System

    Ki Ja LEE  

     
    PAPER-Algorithms

      Page(s):
    1361-1371

    Algorithms for the four elementary arithmetic operations are presented for performing reliable floating-point arithmetic. These operations are achieved by applying residue techniques to weighted number systems and are performed with no loss of accuracy in the course of the computation. The arithmetic operations presented can be used as elementary tools (on many existing architectures) to ensure the reliability of numerical computations. Simulation results, especially for the solutions of ill-conditioned problems, are given, with emphasis on the practical usability of the tools.
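
    As a rough illustration of the residue techniques the abstract refers to, the sketch below performs exact integer arithmetic in a residue number system; the moduli, the dynamic range, and the recovery step are illustrative choices, not the paper's actual algorithms for reliable floating-point operations.

```python
# Exact arithmetic on integers represented by their residues modulo
# pairwise-coprime moduli (an RNS). The moduli below are an illustrative
# choice; real implementations pick them to cover the needed dynamic range.
MODULI = (7, 11, 13)
M = 7 * 11 * 13  # dynamic range: results are exact for integers in [0, 1001)

def to_rns(x):
    """Encode an integer as its tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Addition is performed independently (carry-free) in each channel."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiplication is likewise channel-wise."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Recover the integer via the Chinese Remainder Theorem."""
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # pow(..., -1, m): modular inverse
    return x % M

# Both operations are exact as long as the true result stays within [0, M).
assert from_rns(rns_add(to_rns(123), to_rns(45))) == 168
assert from_rns(rns_mul(to_rns(123), to_rns(45))) == 530  # 5535 mod 1001
```

    Because every channel works independently on small residues, no rounding occurs at any step; this carry-free exactness is what makes residue techniques attractive for reliable arithmetic.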

  • An Extension of Shortcut Deforestation for Accumulative List Folding

    Kazuhiko KAKEHI  Robert GLUCK  Yoshihiko FUTAMURA  

     
    PAPER-Theory and Models of Software

      Page(s):
    1372-1383

    Deforestation is a well-known program transformation technique which eliminates intermediate data structures that are passed between functions. One of its weaknesses is the inability to deforest programs using accumulating parameters. We show how certain kinds of intermediate lists produced by accumulating parameters can be deforested. In this paper we introduce an accumulative variant of foldr, called rdlof, and show the composition of functions defined by foldr and rdlof. As a simplified instance of foldr and rdlof, we then examine dmap, an accumulative extension of map, and give the corresponding fusion rules. While the associated composition rules cannot capture all deforestation problems, they can handle accumulator fusion of fold- and map-style functions in a simple manner. The rules for accumulator fusion presented here can also be viewed as a restricted composition scheme for attribute grammars, which in turn may help us to bridge the gap between the attribute and functional worlds.
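
    The paper's dmap and its fusion rules are not reproduced here, but the flavor of accumulator fusion for map-style functions can be sketched as follows, with dmap rendered (as an assumption) as a map that appends an accumulator list; the fusion rule lets the composed program traverse the input once without building the intermediate list:

```python
# dmap: a hypothetical accumulative extension of map (our rendering, not
# necessarily the paper's definition): apply f to every element of xs,
# then append the accumulator list acc.
def dmap(f, xs, acc):
    return [f(x) for x in xs] + acc

def compose(g, f):
    return lambda x: g(f(x))

# Accumulator-fusion rule for map-style functions:
#   map(g, dmap(f, xs, acc)) == dmap(compose(g, f), xs, map(g, acc))
# The right-hand side traverses xs once and never materializes the
# intermediate list of f-results.
f = lambda x: x + 1
g = lambda x: 2 * x
xs, acc = [1, 2, 3], [4, 5]
lhs = [g(y) for y in dmap(f, xs, acc)]
rhs = dmap(compose(g, f), xs, [g(a) for a in acc])
assert lhs == rhs == [4, 6, 8, 8, 10]
```

    This is the deforestation idea in miniature: the composed form fuses two traversals into one, eliminating the intermediate data structure passed between them.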

  • An Efficient Kerberos Authentication Mechanism Associated with X.509 and DNS (Domain Name System)

    Cheolhyun KIM  Ilyong CHUNG  

     
    PAPER-Applications of Information Security Techniques

      Page(s):
    1384-1389

    Since Kerberos itself does not describe authentication across regions, inter-region authentication can be performed via PKINIT (Public Key Cryptography for Initial Authentication), presented by the IETF (Internet Engineering Task Force) CAT working group. In this paper, an efficient Kerberos authentication mechanism associated with X.509 and the Domain Name System (DNS) is presented, employing two distinct key management systems: asymmetric and symmetric methods. The new protocol is better than the authentication mechanism proposed by the IETF CAT working group in terms of communication complexity.

  • An Efficient Indexing Structure and Image Representation for Content-Based Image Retrieval

    Hun-Woo YOO  Dong-Sik JANG  Yoon-Kyoon NA  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    1390-1398

    In this paper, we present the following schemes for content-based image search: (1) a fast image-search algorithm that can significantly reduce similarity calculations compared to a full comparison against every database image; (2) a compact image representation scheme that can describe the global/local information of the images and provide successful retrieval performance. For fast searches, a tree is constructed by successively dividing nodes down to the desired depth, working from the root to the leaf nodes using the k-means algorithm. When a query is given, we traverse the tree top-down, at each level taking the branch whose node centroid is closest to the query image, until we reach an undivided node. Within undivided nodes, a triangle-inequality algorithm is used to find the images most similar to the query. For compact image representation, RGB color-histogram features quantized into 16 bins for each of the R, G, and B channels are used as global information. The dominant hue, saturation, and value extracted from the HSV joint histogram in localized regions within the image are used as local information. These features are sufficiently compact to index image features in large database systems. In experiments on retrieval efficiency, the proposed method provided substantial performance benefits, reducing the image-similarity calculations by an average of up to 96%; in experiments on retrieval effectiveness, in the best case it provided a 36.8% recall rate for a whale query image and a 100% precision rate for an eagle query image. The overall performance was a 20.0% recall rate and a 72.5% precision rate.
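
    The triangle-inequality step within an undivided node can be sketched as follows (a generic pruning scheme, not the authors' exact algorithm): since |d(q,c) − d(x,c)| ≤ d(q,x) for any centroid c, an image x whose precomputed distance to the centroid differs from the query's by more than the current best distance can be rejected without computing d(q,x).

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_with_triangle_pruning(query, images, centroid):
    """Find the image nearest to `query` inside one leaf node.

    Distances of the stored images to the node centroid (d_xc) would be
    precomputed offline; at query time, |d(q,c) - d(x,c)| is a lower
    bound on d(q,x), so candidates whose bound already exceeds the best
    distance found so far are skipped."""
    d_qc = dist(query, centroid)
    d_xc = [dist(x, centroid) for x in images]
    best, best_i, computed = float("inf"), -1, 0
    for i, x in enumerate(images):
        if abs(d_qc - d_xc[i]) >= best:
            continue  # pruned without a full distance computation
        computed += 1
        d = dist(query, x)
        if d < best:
            best, best_i = d, i
    return best_i, computed
```

    For a toy database [(0,0), (1,1), (10,10)] with centroid (0,0) and query (1,0), only two full distances are computed; the far image is rejected by the bound alone, which is the source of the reported savings in similarity calculations.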

  • Polyhedral Description of Panoramic Range Data by Stable Plane Extraction

    Caihua WANG  Hideki TANAHASHI  Hidekazu HIRAYU  Yoshinori NIWA  Kazuhiko YAMAMOTO  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    1399-1408

    In this paper, we describe a novel technique to extract a polyhedral description from panoramic range data of a scene taken by a panoramic laser range finder. First, we introduce a reasonable noise model for range data acquired with a laser radar range finder, and derive a simple and efficient approximate solution for the optimal fitting of a local plane to the range data under the assumed noise model. Then, we compute the local surface normals using the proposed method and extract stable planar regions from the range data by using both the distribution of the local surface normals and their spatial information in the range image. Finally, we describe a method that builds a polyhedral description of the scene from the extracted stable planar regions of the panoramic range data, which cover a 360° field of view in a polar coordinate system. Experimental results on complex real range data show the effectiveness of the proposed method.
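
    As a simple stand-in for the local-plane fitting step (the paper derives an approximate optimal fit under its specific noise model, which is not reproduced here), the sketch below fits a graph plane z = ax + by + c to a set of 3-D points by ordinary least squares; the local surface normal is then (a, b, −1) up to scale.

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3-D points.
    Returns (a, b, c). This is the textbook normal-equations fit, not
    the paper's noise-model-specific estimator."""
    # Accumulate the 3x3 normal equations  A^T A p = A^T z.
    sxx = sxy = sx = syy = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [sxz, syz, sz]
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        sol[i] = (b[i] - sum(A[i][j] * sol[j] for j in range(i + 1, 3))) / A[i][i]
    return tuple(sol)
```

    Running such a fit over a small window around every range pixel yields the field of local surface normals from which stable planar regions are then grouped.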

  • Image Compression Algorithms Based on Side-Match Vector Quantizer with Gradient-Based Classifiers

    Zhe-Ming LU  Bian YANG  Sheng-He SUN  

     
    PAPER-Image Processing, Image Pattern Recognition

      Page(s):
    1409-1415

    Vector quantization (VQ) is an attractive image compression technique. VQ utilizes the high correlation between neighboring pixels in a block, but disregards the high correlation between adjacent blocks. Unlike VQ, side-match VQ (SMVQ) exploits the codeword information of two encoded adjacent blocks, the upper and left blocks, to encode the current input vector. However, SMVQ is a fixed-bit-rate compression technique and does not make full use of the edge characteristics to predict the input vector. Classified side-match vector quantization (CSMVQ) is an effective image compression technique with a low bit rate and relatively high reconstruction quality. It exploits a block classifier that uses the variances of the neighboring blocks' codewords to decide which class the input vector belongs to. As an alternative, this paper proposes three algorithms that use the gradient values of neighboring blocks' codewords to predict the input block. The first employs a basic gradient-based classifier similar to that of CSMVQ. To achieve lower bit rates, the second exploits a refined two-level classifier structure. To reduce the encoding time further, the last employs a more efficient classifier, in which adaptive class codebooks are defined within a gradient-ordered master codebook according to the various prediction results. Experimental results prove the effectiveness of the proposed algorithms.
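
    The side-match idea underlying SMVQ and its classified variants can be sketched as follows (a generic rendering with 4×4 blocks, not the proposed gradient-based classifiers): the encoder ranks codewords by how well their borders match the bottom row of the upper block's codeword and the right column of the left block's codeword, then sends only a short index into the resulting small state codebook.

```python
def side_match_distortion(candidate, upper, left):
    """Side-match distortion for a 4x4 candidate codeword: squared error
    between its top row and the bottom row of the upper block's codeword,
    plus squared error between its left column and the right column of
    the left block's codeword."""
    d = sum((candidate[0][j] - upper[3][j]) ** 2 for j in range(4))
    d += sum((candidate[i][0] - left[i][3]) ** 2 for i in range(4))
    return d

def smvq_encode_block(codebook, upper, left, state_size=4):
    """Build the state codebook: the state_size codewords with the
    smallest side-match distortion. The encoder transmits only an index
    into this small set, which is why SMVQ achieves low bit rates."""
    ranked = sorted(range(len(codebook)),
                    key=lambda i: side_match_distortion(codebook[i], upper, left))
    return ranked[:state_size]
```

    The paper's classifiers replace this plain ranking: gradients computed from the neighboring codewords select a class (and hence a class codebook) before the side-match search, adapting the prediction to edge characteristics.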

  • Automatic Detection of Mis-Spelled Japanese Expressions Using a New Method for Automatic Extraction of Negative Examples Based on Positive Examples

    Masaki MURATA  Hitoshi ISAHARA  

     
    PAPER-Natural Language Processing

      Page(s):
    1416-1424

    We developed a method for extracting negative examples when only positive examples are given as supervised data. This method calculates the probability of occurrence of an input example that is to be judged positive or negative. It treats an input example that has a high probability of occurrence but does not appear in the set of positive examples as a negative example. We applied this method to one of the important tasks in natural language processing: automatic detection of misspelled Japanese expressions. The results showed that the method is effective. In this study, we also describe two other methods we developed for the detection of misspelled expressions: a combined method and a "leaving-one-out" method. In our experiments, we found that these methods are also effective.
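
    The core selection rule can be sketched as follows; the occurrence model itself, which the paper estimates from corpus data, is abstracted into a function argument here, and the threshold is a hypothetical cutoff:

```python
def extract_negative_examples(candidates, positives, prob, threshold):
    """Return candidates judged negative: expressions that the occurrence
    model considers likely (prob(c) >= threshold) yet that never appear
    among the positive examples. `prob` and `threshold` stand in for the
    paper's estimated probability of occurrence and its cutoff."""
    pos = set(positives)
    return [c for c in candidates if c not in pos and prob(c) >= threshold]

# Toy occurrence model: "ab" is predicted to be common, so its absence
# from the positives marks it as a negative example; "xy" is simply rare.
freq = {"ab": 0.9, "cd": 0.8, "xy": 0.01}
negatives = extract_negative_examples(["ab", "cd", "xy"], ["cd"],
                                      lambda c: freq.get(c, 0.0), 0.5)
assert negatives == ["ab"]
```

    The intuition is that a string the model expects to see often, but which the positive data never contains, was probably excluded for a reason, making it a usable negative example for detecting misspellings.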

  • Labeling Q-Learning in POMDP Environments

    Haeyeon LEE  Hiroyuki KAMAYA  Kenichi ABE  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    1425-1432

    This paper presents a new reinforcement learning (RL) method, called "Labeling Q-learning (LQ-learning)," to solve partially observable Markov decision process (POMDP) problems. Recently, hierarchical RL methods have been widely studied. However, they have the drawback that learning time and memory are consumed merely to maintain the hierarchical structure, even when it is not necessary. Our LQ-learning, in contrast, has no hierarchical structure but adopts a new type of internal memory mechanism. Namely, in LQ-learning, the agent perceives the current state as a pair of an observation and its label, and can thus distinguish more exactly between states that look the same but are in fact different. That is, at each step t, we define a new type of perception of the environment, õt=(ot,θt), where ot is the conventional observation and θt is the label attached to the observation ot. The classical RL algorithm is then used as if the pair (ot,θt) were a Markov state. This labeling is carried out by a Boolean variable, called "CHANGE," and a hash-like or mod function, called the Labeling Function (LF). To demonstrate the efficiency of LQ-learning, we apply it to "maze problems" in grid worlds, used in much of the literature as simulated POMDP environments. Using LQ-learning, we can solve the maze problems without initial knowledge of the environment.
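
    A minimal sketch of the labeled perception follows, assuming a simple CHANGE condition (here, revisiting the same observation) and a mod-style labeling function; the Q-table update is the classical Q-learning rule applied to the augmented (observation, label) pairs:

```python
import random
from collections import defaultdict

class LQAgent:
    """Sketch of Labeling Q-learning: the agent's state is the pair
    (observation, label). When CHANGE fires (here, on seeing the same
    observation twice in a row -- an assumed condition, not the paper's),
    the label is advanced by a mod-style labeling function, letting the
    agent tell apart perceptually aliased states."""
    def __init__(self, n_actions, n_labels=4, alpha=0.1, gamma=0.9):
        self.Q = defaultdict(float)
        self.n_actions, self.n_labels = n_actions, n_labels
        self.alpha, self.gamma = alpha, gamma
        self.label = 0
        self.last_obs = None

    def perceive(self, obs):
        change = (obs == self.last_obs)            # CHANGE condition (assumed)
        if change:
            self.label = (self.label + 1) % self.n_labels  # labeling function
        self.last_obs = obs
        return (obs, self.label)                   # augmented perception (o, theta)

    def act(self, state, eps=0.1):
        """Epsilon-greedy action over the labeled state."""
        if random.random() < eps:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.Q[(state, a)])

    def update(self, s, a, r, s2):
        """Classical Q-learning update, treating (o, theta) as Markov."""
        best = max(self.Q[(s2, a2)] for a2 in range(self.n_actions))
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best - self.Q[(s, a)])
```

    Two visits to the same-looking grid cell thus produce distinct states ("A", 0) and ("A", 1), so the Q-table can store different values for them, which is exactly what a memoryless agent in a POMDP cannot do.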

  • Incremental Construction of Projection Generalizing Neural Networks

    Masashi SUGIYAMA  Hidemitsu OGAWA  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    1433-1442

    In many practical situations in NN learning, training examples tend to be supplied one by one. In such situations, incremental learning seems more natural than batch learning in view of the learning methods of human beings. In this paper, we propose an incremental learning method in neural networks under the projection learning criterion. Although projection learning is a linear learning method, achieving the above goal is not straightforward since it involves redundant expressions of functions with over-complete bases, which is essentially related to pseudo biorthogonal bases (or frames). The proposed method provides exactly the same learning result as that obtained by batch learning. It is theoretically shown that the proposed method is more efficient in computation than batch learning.
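
    The paper's projection-learning update is not reproduced here, but its key property, that incremental processing reproduces exactly the batch result, can be illustrated with a much simpler scheme: maintaining the normal-equation statistics of ordinary least squares incrementally and solving them on demand.

```python
def make_incremental_ls(d):
    """Maintain A = X^T X and b = X^T y incrementally, one example at a
    time; solving A w = b afterwards gives exactly the batch least-squares
    result. A simple stand-in for the paper's projection-learning update,
    not the method itself."""
    A = [[0.0] * d for _ in range(d)]
    b = [0.0] * d
    def add(x, y):
        """Fold in one training example (x, y) in O(d^2) time."""
        for i in range(d):
            b[i] += x[i] * y
            for j in range(d):
                A[i][j] += x[i] * x[j]
    def solve():
        """Gauss-Jordan solve (assumes A nonsingular once enough data seen)."""
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for i in range(d):
            p = max(range(i, d), key=lambda r: abs(M[r][i]))
            M[i], M[p] = M[p], M[i]
            for r in range(d):
                if r != i and M[i][i] != 0:
                    f = M[r][i] / M[i][i]
                    for c in range(i, d + 1):
                        M[r][c] -= f * M[i][c]
        return [M[i][d] / M[i][i] for i in range(d)]
    return add, solve
```

    The harder problem the paper addresses is doing the analogous thing under the projection learning criterion with over-complete bases (pseudo biorthogonal bases), where the naive statistics above no longer suffice.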

  • Effects of Nonuniform Acoustic Fields in Vessels and Blood Velocity Profiles on Doppler Power Spectrum and Mean Blood Velocity

    Dali ZHANG  Yoji HIRAO  Yohsuke KINOUCHI  Hisao YAMAGUCHI  Kazuo YOSHIZAKI  

     
    PAPER-Medical Engineering

      Page(s):
    1443-1451

    This paper presents a detailed simulation method to estimate the Doppler power spectrum and mean blood velocity using real CW Doppler transducers with a twin-crystal arrangement. The method is based on dividing the sample volume into small cells and using the statistics of the Doppler power spectrum at the same Doppler shift frequency, from which the mean blood velocity is predicted. The acoustic fields of semicircular transducers across blood vessels were calculated, and the effects of acoustical and physiological factors on the Doppler power spectrum and mean blood velocity were analyzed. Results show that nonuniformity of the acoustic field of the ultrasonic beam in the blood vessel and the blood velocity profiles significantly affect the Doppler power spectrum and mean blood velocity, whereas the results are not sensitive to Doppler angle, vessel depth, or sample-volume length. Comparisons between simulation and experimental results showed good agreement for a parabolic flow profile. These results will contribute to a better understanding of the Doppler power spectrum and mean blood velocity in medical ultrasound diagnostics.

  • Duration Modeling Using Cumulative Duration Probability

    Tae-Young YANG  Chungyong LEE  Dae-Hee YOUN  

     
    LETTER-Speech and Hearing

      Page(s):
    1452-1454

    A duration modeling technique is proposed for an HMM-based connected-digit recognizer. The proposed technique uses a cumulative duration probability, defined as the partial sum of the duration probabilities, which can be estimated from the training speech data. Two approaches to using it are presented. In the first, the cumulative duration probability is used as a weighting factor on the state transition probability of the HMM; in the second, it replaces the conventional state transition probability. In both approaches, the cumulative duration probability is incorporated directly into the Viterbi decoding procedure, and a modified Viterbi decoding procedure is also presented. One advantage of the proposed technique is that the cumulative duration probability governs the transitions of states and words at each frame, so no additional post-processing is required. The proposed technique was examined in recognition experiments on Korean connected digits. Experimental results showed that the two approaches achieved almost the same performance and that the average recognition accuracy was improved from 83.60% to 93.12%.
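
    The cumulative duration probability and its use as a transition weight (the letter's first approach) can be sketched as follows; clamping the duration index for durations beyond the estimated range is an assumption of this sketch:

```python
import math

def cumulative_duration(dur_probs):
    """Partial sums of the estimated state-duration probabilities:
    cum[d-1] = P(duration <= d frames)."""
    cum, total = [], 0.0
    for p in dur_probs:
        total += p
        cum.append(total)
    return cum

def transition_score(log_a, d, cum):
    """Weight the state-transition log probability log_a by the cumulative
    duration probability after staying d frames. The index is clamped for
    durations beyond the estimated range (an assumption of this sketch)."""
    d = min(d, len(cum))
    return log_a + math.log(cum[d - 1])
```

    Inside Viterbi decoding, this score would replace or weight the ordinary transition term at every frame, so durations steer state and word transitions directly during the search and no post-processing pass is needed.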

  • Extraction of Texture Regions Using Region Based Local Correlation

    Sang Yong SEO  Chae Whan LIM  Nam Chul KIM  

     
    LETTER-Image Processing, Image Pattern Recognition

      Page(s):
    1455-1457

    We present an efficient algorithm using a region-based texture feature for the extraction of texture regions. The key idea of this algorithm is to use the variations of local correlation coefficients (LCCs) according to different orientations to classify texture and shade regions. Experimental results show that the proposed feature suitably extracts the regions that appear visually as texture regions.
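
    The orientation-variation cue can be sketched as follows (a plain correlation between a block and shifted copies of itself, which may differ from the authors' exact LCC definition): texture regions yield widely varying correlation across orientations, while flat shade regions do not.

```python
import math

def lcc(block, dx, dy):
    """Correlation coefficient between a block and its copy shifted by
    (dx, dy) -- the local correlation for one orientation."""
    h, w = len(block), len(block[0])
    pairs = [(block[y][x], block[y + dy][x + dx])
             for y in range(max(0, -dy), min(h, h - dy))
             for x in range(max(0, -dx), min(w, w - dx))]
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs)
    va = sum((a - ma) ** 2 for a, _ in pairs)
    vb = sum((b - mb) ** 2 for _, b in pairs)
    if va == 0 or vb == 0:
        return 1.0  # flat (shade-like) region: treated as fully correlated
    return cov / math.sqrt(va * vb)

def orientation_variation(block):
    """Spread of LCC over four orientations -- large for texture,
    near zero for shade (the classification cue)."""
    coeffs = [lcc(block, dx, dy) for dx, dy in [(1, 0), (0, 1), (1, 1), (1, -1)]]
    return max(coeffs) - min(coeffs)
```

    Vertical stripes, for instance, correlate perfectly along the stripe direction but anti-correlate across it, giving a large orientation spread, whereas a uniform patch scores the same in every direction.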