
IEICE TRANSACTIONS on Information


Volume E78-D No.1 (Publication Date: 1995/01/25)

    Regular Section
  • FOREWORD

    Yoshiaki SHIRAI  Kenichiro ISHII  

     
    FOREWORD

      Page(s):
    1-2
  • Efficient Dynamic Job Scheduling Algorithms for Multiprocessor Systems

    Saptarshi MAHESH  C. Siva Ram MURTHY  C. Pandu RANGAN  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    3-12

    Exploiting the full potential of a multiprocessor system requires a good job scheduling algorithm. In this paper we analyze three dynamic job scheduling algorithms for multiprocessor systems. These algorithms are based on the static job scheduling algorithms LPT (longest processing time first), SJF (shortest job first), and LPR (largest processor requirement first), each of which exhibits good performance in terms of the asymptotic upper bound on the makespan of the schedule it generates. We analyze their performance in the dynamic case experimentally, using a stochastic stream of jobs with arbitrary processing times and processor requirements. We compare their performance with the FCFS algorithm and a simple extension of it. Except for LPT, the algorithms are found to perform significantly better than FCFS; among themselves, SJF performs best, followed by K-LPR, a variation of LPR. We also consider the fairness aspect of these algorithms and propose a general technique for imposing fairness on them. Finally, we analyze the impact of imposing fairness on their performance.
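
    As an illustration of the setting, the sketch below dispatches jobs under the SJF, LPT, LPR, or FCFS priority rules on a fixed processor pool. It is a minimal, non-backfilling simulation written for this summary, not the authors' experimental system; all jobs are assumed available at time 0, whereas the paper studies a stochastic arrival stream.

```python
import heapq

def dynamic_schedule(jobs, total_procs, policy="SJF"):
    """Dispatch jobs, given as (processing_time, processors_required) pairs,
    onto a pool of identical processors under a simple priority rule.
    Non-backfilling: the highest-priority waiting job is started as soon as
    enough processors are free.  Assumes every job fits on the machine.
    Returns the makespan of the resulting schedule."""
    def priority(idx):
        t, p = jobs[idx]
        return {"SJF": t, "LPT": -t, "LPR": -p, "FCFS": idx}[policy]

    order = sorted(range(len(jobs)), key=priority)
    free, now, i = total_procs, 0.0, 0
    running = []                      # min-heap of (finish_time, procs_released)
    while i < len(order) or running:
        # start every waiting job (in priority order) that currently fits
        while i < len(order) and jobs[order[i]][1] <= free:
            t, p = jobs[order[i]]
            heapq.heappush(running, (now + t, p))
            free -= p
            i += 1
        # advance time to the next completion and release its processors
        finish, p = heapq.heappop(running)
        now, free = finish, free + p
    return now

# three jobs (processing_time, processors_required) on 4 processors
print(dynamic_schedule([(4, 2), (1, 1), (3, 3)], total_procs=4, policy="SJF"))  # 7.0
```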

  • Complexity of Finding Alphabet Indexing

    Shinichi SHIMOZONO  Satoru MIYANO  

     
    PAPER-Algorithm and Computational Complexity

      Page(s):
    13-18

    For two finite disjoint sets P and Q of strings over an alphabet Σ, an alphabet indexing for P, Q by an indexing alphabet Γ with |Γ| < |Σ| is a mapping φ: Σ → Γ satisfying φ̃(P) ∩ φ̃(Q) = ∅, where φ̃ is the homomorphism derived from φ. We defined this notion through experiments of knowledge acquisition from amino acid sequences of proteins by learning algorithms. This paper analyzes the complexity of finding an alphabet indexing. We first show that the problem is NP-complete. Then we give a local search algorithm for this problem and show a result on PLS-completeness.
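
    The definition translates directly into a check, sketched below for illustration: a mapping φ is an alphabet indexing exactly when the images of P and Q under the induced homomorphism remain disjoint. The brute-force search over all |Γ|^|Σ| mappings is included only to show why the problem's complexity matters; it is not the paper's local search algorithm.

```python
from itertools import product

def is_alphabet_indexing(phi, P, Q):
    """Check whether the letter-to-letter map `phi` (dict: Sigma -> Gamma) is
    an alphabet indexing for the disjoint string sets P and Q, i.e. whether
    the images of P and Q under the induced homomorphism are still disjoint.
    A direct transcription of the definition."""
    image = lambda s: "".join(phi[c] for c in s)
    return {image(s) for s in P}.isdisjoint({image(s) for s in Q})

def brute_force_indexing(sigma, gamma, P, Q):
    """Exhaustive search over all |Gamma|^|Sigma| mappings; exponential in
    |Sigma|, which is why the complexity of the problem matters."""
    for values in product(gamma, repeat=len(sigma)):
        phi = dict(zip(sigma, values))
        if is_alphabet_indexing(phi, P, Q):
            return phi
    return None

# toy example with Sigma = {a, b, c} and Gamma = {0, 1}
print(brute_force_indexing("abc", "01", P={"ab", "ac"}, Q={"ba", "ca"}))
```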

  • Unification-Failure Filter for Natural Language

    Alfredo M. MAEDA  Hideto TOMABECHI  Jun-ichi AOE  

     
    PAPER-Software Systems

      Page(s):
    19-26

    Graph unification is doubtless the most expensive process in unification-based grammar parsing, since it takes the vast majority of the total parsing time of natural language sentences. Much of that time is wasted on failures: in general, no less than 60% of the graph unifications performed actually fail. Thus one way to achieve unification-time speed-up is to focus on an efficient, fast way of dealing with such unification failures. In this paper, a process, prior to unification itself, capable of filtering out a considerably high percentage of the graphs that would fail unification is proposed. This unification-filtering process consists of comparing signatures that correspond to each of the graphs to be unified. The unification filter (hereafter UF) is capable of stopping around 87% of the non-unifiable graphs before unification itself takes place. UF takes significantly less time to detect and discard graphs that do not unify than unification would take to fail on the same graphs. As a result of using UF, unification is performed in around 71% of the time of the fastest known unification algorithm.
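
    A minimal sketch of the filtering idea follows. The signature used here, the set of (path, atomic value) pairs of a feature structure, is an illustrative stand-in for the paper's signature encoding; the point is only that a cheap comparison of signatures can reject a value clash without running full unification.

```python
def signature(graph):
    """A crude signature for a feature-structure graph: the set of
    (feature path, atomic value) pairs at its atomic leaves.  Illustrative
    only; the paper's signature encoding differs."""
    sig = set()
    def walk(node, path):
        if isinstance(node, dict):
            for feat, sub in node.items():
                walk(sub, path + (feat,))
        else:
            sig.add((path, node))
    walk(graph, ())
    return sig

def may_unify(g1, g2):
    """Pre-unification filter: if the two graphs assign different atomic
    values to the same path, unification must fail, so the attempt can be
    skipped."""
    vals1 = dict(signature(g1))
    vals2 = dict(signature(g2))
    return all(vals2.get(p, v) == v for p, v in vals1.items())

# a clash on the ('agr', 'num') path is caught without full unification
g1 = {"cat": "np", "agr": {"num": "sg"}}
g2 = {"cat": "np", "agr": {"num": "pl"}}
print(may_unify(g1, g2))   # False -> the unification attempt can be skipped
```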

  • Process Composition and Interleave Reduction in Parallel Process Specification

    Makoto TSUJIGADO  Teruo HIKITA  Jun GINBAYASHI  

     
    PAPER-Software Systems

      Page(s):
    27-36

    In formal specification languages for parallel processes, such as CSP and LOTOS, algebraic laws for the basic operators are provided that can be used to transform process expressions; in particular, the composition of processes can be calculated using these laws. Process composition can be used to simplify and improve a specification, and also to prove properties of the specification such as the absence of deadlock. Here we test the practicality of process composition using CSP and suggest useful techniques, working through an example of nontrivial size and complexity. We emphasize that the size explosion of composed processes, caused by the interleaving of the events of the component processes, is a serious problem. We then propose a technique, which we name the two-way pipe, that can be used to reduce the size of the composed process; this can be regarded as program optimization at the specification level.
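
    The size explosion mentioned above is easy to reproduce: under pure interleaving, the composition of two processes can have up to |P| x |Q| states. The toy sketch below, written for this summary and unrelated to the paper's CSP notation, builds that interleaved product for two small labelled transition systems.

```python
from itertools import product

def interleave(p, q):
    """Interleaved parallel composition (no synchronisation) of two labelled
    transition systems, each given as {state: [(event, next_state), ...]}.
    The composite has up to |P| * |Q| states, which is the size explosion
    the paper works to reduce."""
    comp = {}
    for s, t in product(p, q):
        comp[(s, t)] = ([(e, (s2, t)) for e, s2 in p[s]]
                        + [(e, (s, t2)) for e, t2 in q[t]])
    return comp

# two 3-state cyclic processes interleave into a 9-state composite
P = {0: [("a", 1)], 1: [("b", 2)], 2: [("c", 0)]}
Q = {0: [("x", 1)], 1: [("y", 2)], 2: [("z", 0)]}
print(len(interleave(P, Q)))   # 9
```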

  • Calculation of Exact Statistics on Directional Data in the 2-D Space

    Hajimu KAWAKAMI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    37-48

    This paper describes a new algorithm for calculating exact statistics on directional data and its application to pattern processing. Although information about directional characteristics is practically useful in image processing, e.g. texture analysis or color segmentation, dominant information is not always extracted as exact statistics on directional data. The main reason is the periodicity inherent in directional data. For example, the expectation of a random variable X is defined as ∫xp(x)dx, where p(x) is the probability density function of X; therefore, when a random direction D is distributed only at 170° and -170° with the same probability density, the expectation of D comes out as 0° if nothing about the periodicity is considered. We would, however, expect that the exact expectation of D should be 180°. To overcome this problem, we first define a directional distance in a form that introduces the periodicity. Then, we propose the idea of defining directional statistics through the problem of minimizing the arithmetic mean of squared directional distances to each sample direction. Because the periodicity is introduced into the directional distance definition, the directional statistics are calculated as exact statistics on directional data. Although the introduced periodicity might make the minimization complex, we can compensate for the complexity by introducing recurrence formulas; consequently, dominant information can efficiently be extracted as the directional statistics from those data. Experiments on their applications to pattern processing show that the proposed algorithm works well in detecting (1) divergent points of distorted vector field patterns with noise and (2) moving directions from translational movement vector fields.
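
    The 170°/-170° example can be checked with the sketch below, which implements the paper's definition of the directional mean, the direction minimizing the mean squared directional distance to the samples, by a brute-force grid search rather than by the recurrence formulas the paper derives.

```python
import numpy as np

def directional_distance(a, b):
    """Angular separation in degrees, respecting the 360-degree periodicity."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def directional_mean(samples, step=1.0):
    """Direction minimising the mean of squared directional distances to the
    samples (the paper's definition), found here by a slow but simple grid
    search over candidate directions."""
    candidates = np.arange(0.0, 360.0, step)
    samples = np.asarray(samples, dtype=float)
    costs = [np.mean(directional_distance(c, samples) ** 2) for c in candidates]
    return candidates[int(np.argmin(costs))]

print(np.mean([170, -170]))            # naive arithmetic mean: 0.0
print(directional_mean([170, -170]))   # periodicity-aware mean: 180.0
```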

  • A Segmentation Method for Sign Language Recognition

    Eiji OHIRA  Hirohiko SAGAWA  Tomoko SAKIYAMA  Masaru OHKI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Page(s):
    49-57

    This paper discusses sign word segmentation methods and the extraction of motion features for sign language recognition. Because Japanese sign language grammar has not yet been systematized and because sign language does not have prepositions, it is more difficult to use grammatical and semantic information in sign language recognition than in speech recognition. Segmentation significantly improves recognition efficiency, so we propose a method of dividing sign language based on rests and on the envelope and minima of motion speed. The sign unit corresponding to a sign word is detected from the divided positions using such features as changes of hand shape. Experiments confirmed the validity of word segmentation of sign language based on the temporal structure of motion.
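
    A minimal sketch of the segmentation step is given below: candidate boundaries are placed at local minima of the hand's motion speed that fall below a rest threshold. The threshold value, the 3-D position input, and the function name are assumptions made for illustration; the paper additionally uses the speed envelope and hand-shape changes to select sign-word boundaries.

```python
import numpy as np

def segment_by_speed(positions, rest_thresh=0.05):
    """Cut a stream of 3-D hand positions into candidate sign segments at
    frames where the motion speed is a local minimum below a rest threshold.
    Returns a list of (start_frame, end_frame) pairs."""
    pos = np.asarray(positions, dtype=float)
    speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)   # frame-to-frame speed
    cuts = [i for i in range(1, len(speed) - 1)
            if speed[i] < rest_thresh
            and speed[i] <= speed[i - 1] and speed[i] <= speed[i + 1]]
    bounds = [0] + cuts + [len(pos) - 1]
    return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]

# the hand moves, rests between frames 2 and 3, then moves again
track = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 0, 0), (3, 0, 0), (4, 0, 0)]
print(segment_by_speed(track))   # [(0, 2), (2, 5)]
```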

  • An Extended Centering Mechanism for Interpreting Pronouns and Zero-Pronouns

    Shingo TAKADA  Norihisa DOI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    58-67

    Zero-pronouns and overt pronouns occur frequently in Japanese text. These must be interpreted by recognizing their antecedents in order to properly understand a piece of discourse. The notion of "centering" has been used to help in the interpretation process for intersentential anaphors. This is based on the premise that in a piece of discourse, some members have a greater amount of attention put on them than other members. In Japanese, the zero-pronoun is said to have the greatest amount of attention put on it. But when there is more than one zero-pronoun in a sentence, only one of them can be accounted for using centering. Overt pronouns and any other zero-pronouns might as well have appeared as ordinary noun phrases. In this paper, the notion of centering is extended so that these can also be interpreted. Basically, zero-pronouns and overt pronouns are treated as being more "centered" in the discourse than other, ordinary noun phrases. They are put in an ordered list called the Center List. Any other noun phrases appearing in a sentence are put in another list called the Possible Center List. Noun phrases within both lists are ordered according to their degrees of salience. To see the effect of our approach, it was implemented in a simple system with minimal constraints and evaluated. The result showed that when the antecedent is in either the Center List or the Possible Center List, 80% of all zero-pronouns and overt pronouns were properly interpreted.
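
    The sketch below illustrates the data flow of the extended mechanism: pronouns and zero-pronouns are resolved against the Center List first and the Possible Center List second, and both lists are rebuilt for each sentence in salience order. The grammatical-role ranking and the `compatible` agreement test are placeholders, not the paper's actual salience criteria.

```python
def resolve_anaphors(sentences, compatible):
    """Toy resolver in the spirit of the extended centering mechanism.
    Each sentence is a list of noun-phrase records such as
    {"id": "taro", "role": "subject", "type": "np"}.  Zero-pronouns and
    overt pronouns are matched against the Center List (CL) first and the
    Possible Center List (PCL) second; ordinary noun phrases go into the
    PCL.  `compatible(pronoun, candidate)` is a caller-supplied agreement
    test (person, number, semantic class, ...)."""
    ROLE_RANK = {"topic": 0, "subject": 1, "object": 2, "other": 3}
    cl, pcl, antecedents = [], [], {}
    for sentence in sentences:
        new_cl, new_pcl = [], []
        # process noun phrases in (approximate) salience order
        for phrase in sorted(sentence, key=lambda x: ROLE_RANK.get(x["role"], 3)):
            if phrase["type"] in ("zero", "pronoun"):
                match = next((c for c in cl + pcl if compatible(phrase, c)), None)
                antecedents[phrase["id"]] = match["id"] if match else None
                new_cl.append(phrase)
            else:
                new_pcl.append(phrase)
        cl, pcl = new_cl, new_pcl
    return antecedents

sents = [[{"id": "taro", "role": "subject", "type": "np"}],
         [{"id": "zero1", "role": "subject", "type": "zero"}]]
print(resolve_anaphors(sents, compatible=lambda pro, c: True))  # {'zero1': 'taro'}
```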

  • Automatic Alignment of Japanese-Chinese Bilingual Texts

    Chew Lim TAN  Makoto NAGAO  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    68-76

    Automatic alignment of bilingual texts is useful to example-based machine translation because it facilitates the creation of example pairs of translations for the machine. Two main approaches to automatic alignment have been reported in the literature: the lexical approach and the statistical approach. The former looks for relationships between the lexical contents of the bilingual texts in order to find alignment pairs, while the latter uses the statistical correlation between sentence lengths of the bilingual texts as the basis of matching. This paper describes a combination of the two approaches for aligning Japanese-Chinese bilingual texts, allowing kanji contents and sentence lengths in the texts to work together in the alignment process. Because of the differences in sentential structure between Japanese and Chinese, matching at the sentence level may result in frequent matching between groups of sentences en masse. In view of this, the current work also attempts to create shorter alignment pairs by permitting sentences to be matched with clauses or phrases of the other text where possible. While such matching is more difficult and error-prone, the reliance on kanji contents has proven to be very useful in minimizing the errors. The current research has thus found solutions to problems that are unique to the present work.
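
    The combination of cues can be pictured with the toy scoring function below: a lexical term based on shared kanji (Han ideographs common to the two segments) and a statistical term based on relative length. The weights and the simple length model are assumptions made for illustration; the paper embeds both cues in a proper alignment procedure rather than a single pairwise score.

```python
import unicodedata

def kanji_set(text):
    """Han (kanji/hanzi) characters in a string; shared ideographs are the
    lexical cue exploited for Japanese-Chinese alignment."""
    return {ch for ch in text if 'CJK UNIFIED' in unicodedata.name(ch, '')}

def pair_score(ja, zh, w_lex=0.7, w_len=0.3):
    """Illustrative score for aligning a Japanese segment with a Chinese
    segment: a kanji-overlap (lexical) term combined with a length-ratio
    (statistical) term.  Weights are arbitrary."""
    ja_k, zh_k = kanji_set(ja), kanji_set(zh)
    lex = len(ja_k & zh_k) / max(1, len(ja_k | zh_k))
    length = min(len(ja), len(zh)) / max(1, max(len(ja), len(zh)))
    return w_lex * lex + w_len * length

print(pair_score("私は学校へ行く。", "我去学校。"))
```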

  • Detection of the K-Complex Using a New Method of Recognizing Waveform Based on the Discrete Wavelet Transform

    Zhengwei TANG  Naohiro ISHII  

     
    PAPER-Bio-Cybernetics and Neurocomputing

      Page(s):
    77-85

    In this paper, a method of waveform recognition based on the Discrete Wavelet Transform (DWT), previously presented by us, is applied to detecting the K-complex in human EEG, a slow wave overridden by fast rhythms (called spindles). The features of the K-complex are extracted in terms of three parameters: the local maxima of the wavelet transform modulus, the average slope, and the number of DWT coefficients in a wave. The 4th-order B-spline wavelet is selected as the wavelet basis. Two channels at different resolutions are used to detect the slow wave and the sleep spindle contained in the K-complex. Following the principle of minimum distance classification, classifiers are designed to decide the thresholds of the recognition criteria. EEG signals containing K-complexes elicited by sound stimuli are used as patterns to train the classifiers. Compared with traditional methods of waveform recognition in the time domain, this method has the advantage of automatically classifying the duration ranks of various waves with different frequencies. Hence, it is especially suitable for recognizing signals that are superimpositions of waves with different frequencies. The experimental results of the detection of K-complexes indicate that the method is effective.
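
    The classification step can be pictured with the minimum-distance sketch below, which operates on the three per-wave features named in the abstract. The prototype vectors are purely illustrative; in the paper they are trained on EEG containing sound-evoked K-complexes, and the features themselves come from a 4th-order B-spline DWT not reproduced here.

```python
import numpy as np

def minimum_distance_classify(features, prototypes):
    """Minimum-distance classifier over per-wave feature vectors (here:
    local maximum of the DWT modulus, average slope, number of DWT
    coefficients in the wave).  Assigns the label whose prototype is
    nearest in Euclidean distance."""
    x = np.asarray(features, dtype=float)
    labels = list(prototypes)
    dists = [np.linalg.norm(x - np.asarray(prototypes[k])) for k in labels]
    return labels[int(np.argmin(dists))]

# hypothetical prototypes: [max modulus, average slope, coefficient count]
prototypes = {"K-complex slow wave": [8.0, 0.9, 12.0],
              "background EEG":      [2.0, 0.2,  4.0]}
print(minimum_distance_classify([7.2, 0.8, 11.0], prototypes))
```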

  • On the Negation-Limited Circuit Complexity of Clique Functions

    Tetsuro NISHINO  Keisuke TANAKA  

     
    LETTER-Algorithm and Computational Complexity

      Page(s):
    86-89

    A negation-limited circuit is a combinational circuit which includes at most ⌈log(n+1)⌉ NOT gates. We show a relationship between the size of negation-limited circuits computing clique functions and the number of NOT gates in the circuits.

  • On the Number of Negations Needed to Compute Parity Functions

    Tetsuro NISHINO  Jaikumar RADHAKRISHNAN  

     
    LETTER-Algorithm and Computational Complexity

      Page(s):
    90-91

    We exactly determine the number of negations needed to compute the parity functions and the complement of the parity functions. We show that with k NOT gates, parity can be computed on at most 2^(k+1) - 1 variables, and the complement of parity on at most 2^(k+1) - 2 variables. Both bounds are shown to be tight.