IEICE TRANSACTIONS on Information

Volume E77-D No.2 (Publication Date: 1994/02/25)

    Special Issue on Natural Language Processing and Understanding
  • FOREWORD

    Hozumi TANAKA  

     
    FOREWORD

      Page(s):
    159-160
  • A Preferential Constraint Satisfaction Technique for Natural Language Analysis

    Katashi NAGAO  

     
    PAPER

      Page(s):
    161-170

    In this paper, we present a new technique for the semantic analysis of sentences, including an ambiguity-packing method that generates a packed representation of individual syntactic and semantic structures. This representation is based on a dependency structure with constraints that must be satisfied in the syntax-semantics mapping phase. Complete syntax-semantics mapping is not performed until all ambiguities have been resolved, thus avoiding the combinatorial explosions that sometimes occur when unpacking locally packed ambiguities. A constraint satisfaction technique makes it possible to resolve ambiguities efficiently without unpacking. Disambiguation is the process of applying syntactic and semantic constraints to the possible candidate solutions (such as modifiees, cases, and word senses) and removing unsatisfactory candidates. Since several candidates often remain after applying constraints, another kind of knowledge is required to enable selection of the most plausible candidate solution. We call this new knowledge a preference. Constraints and preferences must be applied in coordination for disambiguation; either of them alone is insufficient for the purpose, and the interactions between them are important. We also present an algorithm for controlling the interaction between the constraints and the preferences in the disambiguation process. By allowing the preferences to control the application of the constraints, ambiguities can be resolved efficiently, avoiding combinatorial explosions.
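
    The two-stage control described in this abstract can be pictured with a small Python sketch (not the paper's implementation): constraints first prune the candidates packed per ambiguous slot, and preferences then rank the survivors. The slots, constraints, and scores below are invented for illustration.

    def disambiguate(packed, constraints, preferences):
        """packed: slot -> list of candidate values (packed ambiguities).
        constraints: slot -> list of predicates; preferences: slot -> score fn."""
        resolved = {}
        for slot, values in packed.items():
            # Constraint phase: drop unsatisfactory candidates without
            # expanding combinations of slots (no unpacking).
            survivors = [v for v in values
                         if all(ok(v) for ok in constraints.get(slot, []))]
            # Preference phase: pick the most plausible remaining candidate.
            score = preferences.get(slot, lambda v: 0)
            resolved[slot] = max(survivors, key=score) if survivors else None
        return resolved

    # Hypothetical ambiguities: a modifiee choice and a word-sense choice.
    packed = {
        "modifiee(prep-phrase)": ["verb:saw", "noun:man"],
        "sense(bank)": ["riverside", "financial-institution"],
    }
    constraints = {"sense(bank)": [lambda v: v != "riverside"]}       # toy semantic constraint
    preferences = {"modifiee(prep-phrase)": lambda v: 2 if v.startswith("verb") else 1}
    print(disambiguate(packed, constraints, preferences))
    # -> {'modifiee(prep-phrase)': 'verb:saw', 'sense(bank)': 'financial-institution'}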

  • cu-Prolog for Constraint-Based Natural Language Processing

    Hiroshi TSUDA  

     
    PAPER

      Page(s):
    171-180

    This paper introduces the constraint logic programming (CLP) language cu-Prolog as an implementation framework for constraint-based natural language processing. Compared to other CLP languages, cu-Prolog has several unique features. Most CLP languages take algebraic equations or inequations as constraints; cu-Prolog, on the other hand, takes Prolog atomic formulas over user-defined predicates. cu-Prolog can thus describe the symbolic and combinatorial constraints that occur in constraint-based grammar formalisms. As a constraint solver, cu-Prolog applies the unfold/fold transformation, well known as a program transformation technique, dynamically with some heuristics. To treat the partiality of information described with feature structures, cu-Prolog uses the PST (Partially Specified Term) as its data structure. Sections 1 and 2 give an introduction to the constraint-based grammar formalisms on which this paper is based; the outline of cu-Prolog is explained in Sect. 3, and implementation issues are described in Sect. 4. Section 5 illustrates its linguistic applications to disjunctive feature structures (DFS) and to parsing constraint-based grammar formalisms such as Japanese Phrase Structure Grammar (JPSG). In both applications, the disambiguation process is realized by transforming constraints, which gives a picture of constraint-based NLP.
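
    As a rough illustration of the constraint style this abstract attributes to cu-Prolog, the Python sketch below treats a constraint as a goal over a user-defined predicate and simplifies it by unfolding the goal against the predicate's definition. The agree/2 predicate and its facts are hypothetical; real cu-Prolog also performs fold steps, heuristics, and PST unification, all omitted here.

    # User-defined predicate given as a set of facts: agree(Number, VerbForm).
    AGREE = {("sg", "walks"), ("pl", "walk")}

    def unfold(goal_args, facts, bindings):
        """Unfold one goal: return every binding extension consistent with a fact."""
        out = []
        for fact in facts:
            env = dict(bindings)
            ok = True
            for a, f in zip(goal_args, fact):
                if a.startswith("?"):                   # variable argument
                    if a in env and env[a] != f:
                        ok = False
                        break
                    env[a] = f
                elif a != f:                            # constant must match the fact
                    ok = False
                    break
            if ok:
                out.append(env)
        return out

    # Constraint: agree(?Num, ?Verb), with ?Verb already bound to "walk".
    solutions = unfold(("?Num", "?Verb"), AGREE, {"?Verb": "walk"})
    print(solutions)   # [{'?Verb': 'walk', '?Num': 'pl'}] -- the symbolic constraint is solved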

  • Japanese Sentence Generation Grammar Based on the Pragmatic Constraints

    Kyoko KAI  Yuko DEN  Yasuharu DEN  Mika OBA  Jun-ichi NAKAMURA  Sho YOSHIDA  

     
    PAPER

      Page(s):
    181-191

    Naturalness of expressions reflects various pragmatic factors in addition to grammatical factors. In this paper, we discuss relations between expressions and two pragmatic factors: the speaker's point of view and the hierarchical relation among participants. "Degree of empathy" and "class" are used to express these pragmatic factors as one-dimensional notions, so that inequalities and equalities over them become conditions for selecting natural expressions. We formulate these conditions as principles about lexical and syntactic constraints, and have implemented a sentence generation grammar using the unification grammar formalism.
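
    The selection principle can be illustrated with a small hedged sketch: the pragmatic factors are reduced to one-dimensional values, and inequalities over them choose among candidate expressions. The participants, values, and the giving-verb alternation below are illustrative assumptions, not the paper's grammar.

    def choose_giving_verb(empathy, cls, giver, receiver):
        """Pick a Japanese giving verb from simple pragmatic inequalities (toy rules)."""
        if empathy[receiver] > empathy[giver]:
            # Speaker empathizes with the receiver.
            return "kureru" if cls[giver] <= cls[receiver] else "kudasaru"
        else:
            # Speaker empathizes with the giver.
            return "ageru" if cls[receiver] <= cls[giver] else "sashiageru"

    empathy = {"speaker's friend": 2, "teacher": 1}   # point of view of the speaker
    cls = {"speaker's friend": 1, "teacher": 3}       # hierarchical relation among participants
    print(choose_giving_verb(empathy, cls, giver="teacher", receiver="speaker's friend"))
    # -> "kudasaru": a higher-class giver gives to someone the speaker empathizes with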

  • Multiple World Representation of Mental States for Dialogue Processing

    Toru SUGIMOTO  Akinori YONEZAWA  

     
    PAPER

      Page(s):
    192-208

    As a general basis for constructing a cooperative and flexible dialogue system, we are interested in modelling the inference process of an agent who participates in a dialogue. For this purpose, it is natural and powerful to model it within the agent's general cognitive framework for problem solving. This paper presents such a framework. In this framework, we represent an agent's mental states in a form called Mental World Structure, which consists of multiple mental worlds. Each mental world is a set of mental propositions and corresponds to one modal context, that is, a specific point of view. Modalities in an agent's mental states are represented by path expressions, which are first-class citizens of the system and can be composed with each other to make up composite modalities. With Mental World Structure, we can handle modalities more flexibly than with ordinary modal logics, situation theory, and other representation systems. We smoothly incorporate three basic inference procedures into the structure: deduction, abduction, and truth maintenance. Precise definitions of the structure and the inference procedures are given. Furthermore, we explain several cooperative dialogues in our framework as examples.
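
    A minimal data-structure sketch, under invented names, of the idea described above: mental worlds are indexed by path expressions, and path expressions compose into composite modalities. This is an illustration only, not the paper's system; the inference procedures are omitted.

    class MentalWorldStructure:
        def __init__(self):
            self.worlds = {}                      # path expression -> set of propositions

        def assert_prop(self, path, proposition):
            """Add a proposition to the mental world reached by `path`."""
            self.worlds.setdefault(path, set()).add(proposition)

        def holds(self, path, proposition):
            return proposition in self.worlds.get(path, set())

    def compose(*paths):
        """Compose path expressions into a composite modality (tuple concatenation)."""
        return tuple(step for p in paths for step in p)

    mws = MentalWorldStructure()
    bel_self = ("believe", "self")
    bel_user = ("believe", "user")
    # "I believe that the user believes the meeting is at 10."
    mws.assert_prop(compose(bel_self, bel_user), "meeting-at-10")
    print(mws.holds(compose(bel_self, bel_user), "meeting-at-10"))   # True
    print(mws.holds(bel_user, "meeting-at-10"))                      # False: different modal context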

  • A Logical Model for Plan Recognition and Belief Revision

    Katashi NAGAO  

     
    PAPER

      Page(s):
    209-217

    In this paper, we present a unified model for dialogue understanding that handles various sorts of ambiguities, such as lexical, syntactic, semantic, and plan ambiguities. This model is able to estimate and revise the most preferable interpretation of utterances as a dialogue progresses, capturing the dynamic nature of dialogue management. The model consists of two main portions: (1) an extension of first-order logic for maintaining multiple interpretations of ambiguous utterances in a dialogue; and (2) a device that estimates and revises the most preferable interpretation from among these multiple interpretations. Since the model is logic-based, it provides a good basis for formulating a rational justification of its current interpretation, which is one of the most desirable properties for generating helpful responses. These features make the model well suited to interactive dialogue management.
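
    The estimate-and-revise behaviour can be sketched, in a deliberately simplified numeric form, as a manager that keeps candidate interpretations alive, prefers the highest-scoring one, and retracts interpretations that new utterances rule out. The scoring scheme and interpretation names are hypothetical; the paper's model is logic-based rather than numeric.

    class InterpretationManager:
        def __init__(self, candidates):
            # candidate interpretation -> accumulated preference score
            self.scores = {c: 0.0 for c in candidates}

        def observe(self, evidence):
            """evidence: interpretation -> score increment
            (negative when a new utterance speaks against it)."""
            for interp, delta in evidence.items():
                if interp in self.scores:
                    self.scores[interp] += delta
            # Interpretations that became untenable are retracted (belief revision).
            self.scores = {i: s for i, s in self.scores.items() if s > -1.0}

        def preferred(self):
            return max(self.scores, key=self.scores.get) if self.scores else None

    mgr = InterpretationManager(["plan:book-hotel", "plan:register-conference"])
    mgr.observe({"plan:register-conference": 0.6})        # "I'd like to attend the conference."
    print(mgr.preferred())                                 # plan:register-conference
    mgr.observe({"plan:register-conference": -2.0,         # a later utterance contradicts it
                 "plan:book-hotel": 0.8})
    print(mgr.preferred())                                 # plan:book-hotel (interpretation revised)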

  • A Family of Generalized LR Parsing Algorithms Using Ancestors Table

    Hozumi TANAKA  K.G. SURESH  Koichi YAMADA  

     
    PAPER

      Page(s):
    218-226

    A family of new generalized LR parsing algorithms is proposed that makes use of a set of ancestors tables introduced by Kipps. Since Kipps's algorithm does not give a method for extracting parsing results, it is regarded not as a practical parser but as a recognizer. In this paper, we propose two methods for extracting all parse trees from a set of ancestors tables in the top vertices of a graph-structured stack. For an input sentence of length n, the time complexity of the Tomita parser can exceed O(n³) for some context-free grammars (CFGs), whereas the time complexity of our parser is O(n³) for any CFG, since our algorithm is based on Kipps's recognizer. Extracting a parse tree from a set of ancestors tables takes time of order n². Some preliminary experimental results are given to show the efficiency of our parsers over the Tomita parser.
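
    A simplified sketch of the ancestors-table idea this abstract builds on: each vertex of the graph-structured stack records, for every distance k, the vertices reachable by k pops, so a reduction by a rule of length k finds its target vertices by table lookup rather than by enumerating stack paths. The class below illustrates the data structure only, not the full parsing algorithm, and its construction scheme is an assumption for the sketch.

    class Vertex:
        def __init__(self, state, parents=()):
            self.state = state
            self.ancestors = {0: {self}}             # distance -> set of vertices
            depth = max((max(p.ancestors) for p in parents), default=0)
            for k in range(1, depth + 2):
                # Ancestors at distance k are the parents' ancestors at distance k-1.
                level = set()
                for p in parents:
                    level |= p.ancestors.get(k - 1, set())
                if level:
                    self.ancestors[k] = level

        def reduce_targets(self, rule_length):
            """Vertices to which a reduction by a rule of this length returns."""
            return self.ancestors.get(rule_length, set())

    # Tiny graph-structured stack: v0 <- v1 <- v2, with v0 also a direct parent of v2.
    v0 = Vertex(state=0)
    v1 = Vertex(state=1, parents=(v0,))
    v2 = Vertex(state=2, parents=(v1, v0))
    print({v.state for v in v2.reduce_targets(1)})   # {1, 0}
    print({v.state for v in v2.reduce_targets(2)})   # {0}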

  • A Method of Case Structure Analysis for Japanese Sentences Based on Examples in Case Frame Dictionary

    Sadao KUROHASHI  Makoto NAGAO  

     
    PAPER

      Page(s):
    227-239

    A case structure expression is one of the most important forms for representing the meaning of a sentence. Case structure analysis is usually performed by consulting case frame information in a verb dictionary. However, this analysis is very difficult because of several problems, such as word sense ambiguity and structural ambiguity. A conventional way of solving these problems is selectional restriction, but the semantic marker (SM) method it relies on has a drawback: the trade-off between descriptive power and construction cost. In this paper, we propose a method of case structure analysis based on examples in a case frame dictionary. This method uses a case frame dictionary that has some typical example sentences for each case frame, and it selects a proper case frame for an input sentence by matching the input sentence with the examples in the case frame dictionary. The best matching score, which is utilized for selecting a proper case frame for a predicate, can be regarded as the score for the case structure of that predicate. Therefore, when there are two or more readings of a sentence because of structural ambiguity, the best reading can be selected by evaluating the sum of the scores for the case structures of all predicates in the sentence. We report on experiments which show that this method is superior to the conventional, coarse-grained SM method, and describe the superiority of the example-based method over the SM method.
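
    The example-based matching can be pictured with a toy sketch: each case frame carries typical example fillers, and the frame whose examples are most similar to the input's case elements wins. The verb senses, example fillers, and similarity values below are invented stand-ins for the paper's thesaurus-backed case frame dictionary.

    # Hypothetical similarity between two nouns (1.0 = same thesaurus class).
    SIM = {("beer", "sake"): 1.0, ("beer", "proposal"): 0.0,
           ("student", "person"): 0.9, ("student", "committee"): 0.4}

    def sim(a, b):
        return 1.0 if a == b else SIM.get((a, b), SIM.get((b, a), 0.0))

    # Case frame dictionary for a hypothetical ambiguous verb with two senses.
    CASE_FRAMES = {
        "drink(consume)": {"ga": ["person"], "wo": ["sake"]},
        "drink(accept)":  {"ga": ["committee"], "wo": ["proposal"]},
    }

    def best_frame(input_cases):
        def score(frame):
            total = 0.0
            for case, noun in input_cases.items():
                examples = CASE_FRAMES[frame].get(case, [])
                total += max((sim(noun, e) for e in examples), default=0.0)
            return total
        return max(CASE_FRAMES, key=score)

    print(best_frame({"ga": "student", "wo": "beer"}))   # -> drink(consume)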

  • Example-Based Word-Sense Disambiguation

    Naohiko URAMOTO  

     
    PAPER

      Page(s):
    240-246

    This paper presents a new method for resolving lexical (word sense) ambiguities inherent in natural language sentences. The Sentence Analyzer (SENA) was developed to resolve such ambiguities by using constraints and example-based preferences. The ambiguities are packed into a single dependency structure, and grammatical and lexical constraints are applied to it in order to reduce the degree of ambiguity. The application of constraints is realized by a very effective constraint-satisfaction technique. Remaining ambiguities are resolved by the use of preferences calculated from an example-base, which is a set of fully parsed word-to-word dependencies acquired semi-automatically from on-line dictionaries.
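
    The preference step can be sketched as scoring each remaining sense against an example base of word-to-word dependencies, as below. The example base, senses, and similarity values are invented for illustration; SENA's example base is acquired semi-automatically from on-line dictionaries.

    # Example base: (head word, relation, dependent sense) triples with counts.
    EXAMPLE_BASE = {("serve", "object", "food"): 3, ("serve", "object", "customer"): 1}

    def sense_sim(a, b):
        """Hypothetical thesaurus-style similarity between senses."""
        table = {("dish/food", "food"): 1.0, ("dish/plate", "food"): 0.2,
                 ("dish/food", "customer"): 0.1, ("dish/plate", "customer"): 0.1}
        return table.get((a, b), 0.0)

    def preference(head, relation, candidate_sense):
        """Preference of a candidate sense = best weighted match in the example base."""
        return max((count * sense_sim(candidate_sense, dep)
                    for (h, r, dep), count in EXAMPLE_BASE.items()
                    if h == head and r == relation), default=0.0)

    candidates = ["dish/food", "dish/plate"]            # senses of "dish" left after constraints
    best = max(candidates, key=lambda s: preference("serve", "object", s))
    print(best)    # -> dish/food ("serve a dish" prefers the food sense)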

  • A Transfer System Using Example-Based Approach

    Hideo WATANABE  Hiroshi MARUYAMA  

     
    PAPER

      Page(s):
    247-257

    This paper proposes a new type of transfer system, called the Similarity-driven Transfer System (SimTran), which applies an example-based approach to the transfer phase of MT. We describe a method for calculating similarity, a method for searching for the most appropriate set of translation rules, and a method for constructing an output structure from the selected rules. Further, we show that SimTran can use the syntax-based translation rules of conventional transfer systems alongside translation examples.
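
    A schematic sketch of similarity-driven rule selection: transfer rules carry source-side examples, the rule whose example is most similar to the input fragment is applied, and a conventional syntax-based rule can compete on the same score as a fallback. The rules, word similarities, and target patterns below are hypothetical, not SimTran's.

    def word_sim(a, b):
        table = {("bus", "train"): 0.8}
        return 1.0 if a == b else table.get((a, b), table.get((b, a), 0.3))

    TRANSFER_RULES = [
        # Example-based rule: "take a train" -> "densha ni noru" (ride).
        {"source": ("take", "train"), "target": "{obj} ni noru", "example": True},
        # Syntax-based fallback rule: "take X" -> "X wo toru" (grab).
        {"source": ("take", None), "target": "{obj} wo toru", "example": False},
    ]

    def select_rule(verb, obj):
        def score(rule):
            r_verb, r_obj = rule["source"]
            if r_obj is None:                      # syntax-based rule: fixed modest score
                return 0.5 if r_verb == verb else 0.0
            return word_sim(verb, r_verb) * word_sim(obj, r_obj)
        return max(TRANSFER_RULES, key=score)

    print(select_rule("take", "bus")["target"])    # -> "{obj} ni noru"  (analogous to the example)
    print(select_rule("take", "idea")["target"])   # -> "{obj} wo toru"  (falls back to syntax rule)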

  • Spoken Sentence Recognition Based on HMM-LR with Hybrid Language Modeling

    Kenji KITA  Tsuyoshi MORIMOTO  Kazumi OHKURA  Shigeki SAGAYAMA  Yaneo YANO  

     
    PAPER

      Page(s):
    258-265

    This paper describes Japanese spoken sentence recognition using hybrid language modeling, which combines the advantages of syntactic and stochastic language models. As the baseline system, we adopted the HMM-LR speech recognition system, with which we have already achieved good performance on Japanese phrase recognition tasks. Several improvements aimed at handling continuously spoken sentences have been made to this system. The first is HMM training with continuous utterances as well as word utterances; in previous implementations, HMMs were trained with word utterances only, but continuous utterances are included in the training data because coarticulation effects are much stronger in continuous speech. The second is the development of a sentential grammar for Japanese, created by combining inter- and intra-phrase CFG grammars that were developed separately. The third is the incorporation of stochastic linguistic knowledge, namely a stochastic CFG and a bigram model of production rules. The system was evaluated on continuously spoken sentences from a conference registration task of approximately 750 words, and attained a sentence accuracy of 83.9% in the speaker-dependent condition.
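
    The hybrid scoring idea can be sketched as combining stochastic CFG rule probabilities with a bigram model over the sequence of production rules. The log-linear interpolation below is an assumed combination scheme for illustration, and all probabilities and rules are invented.

    import math

    RULE_PROB = {"S->NP VP": 0.9, "NP->N": 0.6, "VP->V NP": 0.7}           # stochastic CFG
    RULE_BIGRAM = {("<s>", "S->NP VP"): 0.8, ("S->NP VP", "NP->N"): 0.5,
                   ("NP->N", "VP->V NP"): 0.4}                              # bigram of production rules

    def derivation_score(rules, weight=0.5):
        """Log-score of a rule sequence under the hypothetical hybrid model."""
        cfg = sum(math.log(RULE_PROB.get(r, 1e-6)) for r in rules)
        bigram = sum(math.log(RULE_BIGRAM.get((p, r), 1e-6))
                     for p, r in zip(["<s>"] + rules[:-1], rules))
        return weight * cfg + (1.0 - weight) * bigram

    print(derivation_score(["S->NP VP", "NP->N", "VP->V NP"]))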