The search functionality is under construction.

Keyword Search Result

[Keyword] dependency(56hit)

41-56hit(56hit)

  • Two-Phase S-Clause Segmentation

    Mi-Young KIM  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing

      Vol:
    E88-D No:7
      Page(s):
    1724-1736

When a dependency parser analyzes long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, we define a "subject clause (S-clause)" as a group of words containing several predicates and their common subject. This paper proposes a two-phase method for S-clause segmentation: the first phase reduces the number of candidate S-clause boundaries, and the second performs the segmentation using decision trees. In an experimental evaluation, the S-clause information proved effective for determining the governor of a subject and that of a predicate in dependency parsing. Further syntactic analysis using S-clauses improved precision by 5 percent.
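The two-phase idea can be sketched in a few lines. Everything below is invented for illustration: the token format, the predicate heuristic, and the stand-in classifier (the paper's actual features and decision trees cannot be reproduced from the abstract).

```python
# Hypothetical sketch of a two-phase segmentation: phase 1 prunes
# candidate S-clause boundaries with a cheap heuristic, phase 2 accepts
# or rejects the survivors with a classifier (a stub standing in for
# the decision trees used in the paper).

def phase1_candidates(tokens):
    """Keep only positions just after a predicate-like token as boundary candidates."""
    return [i + 1 for i, t in enumerate(tokens) if t.endswith("-V")]

def phase2_segment(tokens, candidates, accept):
    """Split the token list at every candidate boundary the classifier accepts."""
    boundaries = [0] + [c for c in candidates if accept(tokens, c)] + [len(tokens)]
    return [tokens[a:b] for a, b in zip(boundaries, boundaries[1:]) if a < b]

tokens = ["he-N", "ran-V", "she-N", "slept-V"]
cands = phase1_candidates(tokens)  # boundary candidates after each predicate
clauses = phase2_segment(tokens, cands, lambda ts, c: c < len(ts))
print(clauses)  # two S-clauses, each with one subject and its predicate
```

The point of phase 1 is purely computational: the classifier in phase 2 only has to judge positions that survive the cheap filter.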

  • On Dependency Pair Method for Proving Termination of Higher-Order Rewrite Systems

    Masahiko SAKAI  Keiichirou KUSAKARI  

     
    PAPER-Computation and Computational Models

      Vol:
    E88-D No:3
      Page(s):
    583-593

This paper explores how to extend the dependency pair technique to proving termination of higher-order rewrite systems. In the first-order case, the termination of a term rewriting system is proved by showing the non-existence of an infinite R-chain of the dependency pairs. In the higher-order case, however, termination and the non-existence of an infinite R-chain do not coincide. We introduce a new notion of dependency forest that characterizes infinite reductions and infinite R-chains, and show that the termination property of a higher-order rewrite system R can be checked by showing the non-existence of an infinite R-chain, provided R is strongly linear or non-nested.
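As background for the first-order case the abstract refers to, the dependency pairs of a term rewriting system can be computed mechanically: pair each rule's left-hand side with every subterm of its right-hand side whose root is a defined symbol. The term encoding and the toy rule below are illustrative assumptions, not from the paper.

```python
# First-order dependency pairs (Arts & Giesl). A term is either a
# variable (a string) or a tuple (symbol, child, ...). The toy TRS
# below is the single rule  minus(s(x), s(y)) -> minus(x, y).

def defined_symbols(rules):
    """Symbols occurring at the root of some left-hand side."""
    return {lhs[0] for lhs, _ in rules}

def subterms(t):
    """All subterms of t, including t itself."""
    yield t
    if isinstance(t, tuple):
        for child in t[1:]:
            yield from subterms(child)

def dependency_pairs(rules):
    """Pair each lhs with every rhs subterm rooted in a defined symbol."""
    defined = defined_symbols(rules)
    return [(lhs, s)
            for lhs, rhs in rules
            for s in subterms(rhs)
            if isinstance(s, tuple) and s[0] in defined]

rules = [(("minus", ("s", "x"), ("s", "y")), ("minus", "x", "y"))]
print(dependency_pairs(rules))  # one pair: the recursive call to minus
```

Termination is then argued from the absence of an infinite chain of such pairs; the paper's contribution is that in the higher-order case this equivalence breaks and needs the dependency-forest notion to repair.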

  • Robust Dependency Parsing of Spontaneous Japanese Spoken Language

    Tomohiro OHNO  Shigeki MATSUBARA  Nobuo KAWAGUCHI  Yasuyoshi INAGAKI  

     
    PAPER-Speech Corpora and Related Topics

      Vol:
    E88-D No:3
      Page(s):
    545-552

Spontaneously spoken Japanese includes many grammatically ill-formed phenomena, such as fillers, hesitations, and inversions, that do not appear in written language. This paper proposes a novel method for robust dependency parsing using a large-scale spoken language corpus, and evaluates its effectiveness and robustness on spontaneously spoken dialogue sentences. By exploiting stochastic information about the appearance of ill-formed phenomena, the method can robustly parse spoken Japanese containing fillers, inversions, or dependencies that cross utterance units. Experimental results show that the parsing accuracy reached 87.0%, and confirm that the location of a bunsetsu and the distance between bunsetsus are effective as stochastic information.

  • Higher-Order Path Orders Based on Computability

    Keiichirou KUSAKARI  

     
    PAPER

      Vol:
    E87-D No:2
      Page(s):
    352-359

Simply-typed term rewriting systems (STRSs) are an extension of term rewriting systems. STRSs can naturally handle higher-order functions, which are widely used in existing functional programming languages. In this paper we design recursive and lexicographic path orders, which can efficiently prove the termination of STRSs. Moreover, we discuss an application to the dependency pair and argument filtering methods, both very effective and efficient techniques for supporting termination proofs.

  • Optimization for the Algebraic Method and Its Application to an Attack of MISTY1

    Yasuo HATANO  Hidema TANAKA  Toshinobu KANEKO  

     
    PAPER-Symmetric Cipher

      Vol:
    E87-A No:1
      Page(s):
    18-27

In this paper, we describe a technique for optimizing the algebraic method as applied to higher order differential attacks. The higher order differential attack is a well-known attack on block ciphers in which an attack equation, derived from a property of a higher order differential of the target block cipher, is used to determine a round key. The algebraic method linearizes the attack equation and determines the true key by a method such as Gaussian elimination. Our technique is based on linear dependency and reduces the complexity of that method. We also describe a technique that allows the algebraic method to be used with an attack equation that holds only probabilistically. We demonstrate the method by attacking a five-round MISTY1, and show that it requires 2^21.6 chosen plaintexts and 2^28.0 encryption times. The computer simulation took about two minutes to complete.
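The core solving step the abstract mentions, Gaussian elimination over GF(2) applied to a linearized attack equation, can be sketched generically. This is not the paper's optimized variant (which exploits linear dependency among equations); the 3-variable system below is a made-up example.

```python
# Generic GF(2) Gaussian elimination: solve A x = b where all
# arithmetic is modulo 2 (XOR). In an algebraic attack, x would hold
# the unknown (linearized) key bits.

def solve_gf2(rows, rhs):
    """Solve A x = b over GF(2); rows are bit lists, rhs a bit list."""
    n = len(rows[0])
    rows = [r[:] + [b] for r, b in zip(rows, rhs)]  # augmented matrix
    pivots = []
    for col in range(n):
        piv = next((i for i in range(len(pivots), len(rows)) if rows[i][col]), None)
        if piv is None:
            continue  # no pivot in this column
        rows[len(pivots)], rows[piv] = rows[piv], rows[len(pivots)]
        for i in range(len(rows)):
            if i != len(pivots) and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[len(pivots)])]
        pivots.append(col)
    x = [0] * n
    for row, col in zip(rows, pivots):
        x[col] = row[-1]
    return x

# a toy full-rank system whose unique solution is the "key" [1, 0, 1]
A = [[1, 1, 0], [0, 1, 1], [1, 0, 0]]
b = [1, 1, 1]
print(solve_gf2(A, b))  # -> [1, 0, 1]
```

The complexity of this step grows cubically with the number of linearized unknowns, which is why reducing it via linear dependency, as the paper does, matters in practice.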

  • A New Probabilistic Dependency Parsing Model for Head-Final, Free Word Order Languages

    Hoojung CHUNG  Hae-Chang RIM  

     
    LETTER-Natural Language Processing

      Vol:
    E86-D No:11
      Page(s):
    2490-2493

We propose a dependency parsing model for head-final, variable word order languages. Based on the observation that each word has its own preference for its modifying distance, and that the preferred distance varies with the surrounding context of the word, we define a parsing model that reflects this preference. Experimental results show that a parser based on our model outperforms other parsers in terms of precision and recall.
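The distance-preference idea can be illustrated with a toy head-final attacher: every word (except the last) attaches to some later word, scored by a per-word distance preference. The probability table and the greedy per-word choice below are invented for illustration; the paper's model and estimation are not reproduced here.

```python
# Toy head-final attachment by distance preference. PREF[w][d] is an
# invented stand-in for P(modifying distance = d | word = w).

PREF = {
    "the":     {1: 0.90, 2: 0.08, 3: 0.02},
    "dog":     {1: 0.60, 2: 0.30, 3: 0.10},
    "quickly": {1: 0.80, 2: 0.15, 3: 0.05},
}

def best_heads(words):
    """For each word but the last, pick the later position its distance preference favors."""
    heads = []
    for i, w in enumerate(words[:-1]):
        scores = {j: PREF[w].get(j - i, 1e-6) for j in range(i + 1, len(words))}
        heads.append(max(scores, key=scores.get))
    return heads

print(best_heads(["the", "dog", "quickly"]))  # -> [1, 2]
```

A real parser would condition the preference on context and search over whole trees rather than attach each word greedily, but the scoring ingredient is the same.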

  • Fuzzy Relational Database Induced by Conditional Probability Relations

    Rolly INTAN  Masao MUKAIDONO  

     
    PAPER-Welfare Engineering

      Vol:
    E86-D No:8
      Page(s):
    1396-1405

In 1982, Buckles and Petry proposed the fuzzy relational database for incorporating non-ideal or fuzzy information in a relational database. The fuzzy relational database relies on the specification of a similarity relation to distinguish each scalar domain in the fuzzy database; these relations are reflexive, symmetric, and max-min transitive. In 1989, Shenoi and Melton extended the fuzzy relational database model of Buckles and Petry to deal with proximity relations for scalar domains. Since reflexivity and symmetry are the only constraints placed on proximity relations, the proximity relation is a generalization of the similarity relation. However, relations between pieces of fuzzy information are not naturally symmetric. Here, we use a conditional probability relation to represent the similarity between two fuzzy data. In connection with the properties of the conditional probability relation, we introduce an interesting mathematical relation, called the weak similarity relation, as a generalization of both the similarity relation and the proximity relation, of which the conditional probability relation is a concrete example. In this paper, we propose a design of a fuzzy relational database that deals with conditional probability relations for scalar domains; these relations are reflexive but not symmetric. In addition, we define a notion of asymmetric redundant tuples based on two interpretations generalizing the concept of redundancy in the classical relational database. In relation to data querying, we discuss partitioning of domains with the objective of developing similarity classes. Finally, we propose a new definition of partial fuzzy functional dependency (PFFD). Fuzzy functional dependency (FFD), an extension of functional dependency (FD) usually used in the design of fuzzy relational databases, can be generated from the PFFD. Inference rules similar to Armstrong's axioms for the FFD are both sound and complete.
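The asymmetry at the heart of this abstract is easy to see numerically. Assuming one common form of the conditional probability relation for fuzzy sets over a finite domain, mu(a|b) = |a ∩ b| / |b| with min as intersection (the exact definition is not given in the abstract), the relation is reflexive but not symmetric:

```python
# Conditional probability relation between fuzzy sets, represented as
# membership vectors over a common finite domain. The sets below are
# invented examples.

def cond_prob(a, b):
    """Degree to which fuzzy set b supports a: |a ∩ b| / |b|, with min as intersection."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    return inter / sum(b)

young      = [1.0, 0.8, 0.3, 0.0]
very_young = [1.0, 0.5, 0.0, 0.0]

print(cond_prob(young, very_young))  # 1.0: "very young" fully supports "young"
print(cond_prob(very_young, young))  # smaller: the relation is asymmetric
```

A symmetric similarity or proximity relation would force both directions to agree, which is exactly what the weak similarity relation relaxes.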

  • An Extension of the Dependency Pair Method for Proving Termination of Higher-Order Rewrite Systems

    Masahiko SAKAI  Yoshitsugu WATANABE  Toshiki SAKABE  

     
    PAPER-Theory/Models of Computation

      Vol:
    E84-D No:8
      Page(s):
    1025-1032

This paper explores how to extend the dependency pair technique for proving termination of higher-order rewrite systems. We show that the termination property of higher-order rewrite systems can be checked by showing the non-existence of an infinite R-chain, which extends Arts and Giesl's result for the first-order case. It is also clarified that the subterm property of the quasi-ordering used for proving termination automatically is indispensable.

  • On Proving AC-Termination by AC-Dependency Pairs

    Keiichirou KUSAKARI  Yoshihito TOYAMA  

     
    PAPER-Theory/Models of Computation

      Vol:
    E84-D No:5
      Page(s):
    604-612

Arts and Giesl introduced the notion of dependency pairs, which gives effective methods for proving termination of term rewriting systems (TRSs). In this paper, we extend this notion to AC-TRSs and introduce new methods for effectively proving AC-termination. The notion of dependency pairs cannot be directly applied to AC-TRSs; to avoid this difficulty, we introduce the notion of extended dependency pairs. Finally, we define the AC-dependency pair and the AC-dependency chain. Furthermore, we propose approximated AC-dependency graphs, which are very useful for proving AC-termination in practice, using an approximation technique based on Ω-reduction and ΩV-reduction.

  • Stochastic Evaluation of Acoustic Environment with Noise Cancellation under Introduction of Hierarchically Functional Type Probability Model

    Yoshifumi FUJITA  Mitsuo OHTA  

     
    PAPER-Noise Cancellation for Acoustic System

      Vol:
    E84-A No:2
      Page(s):
    467-474

For evaluating the output response fluctuation of an actual environmental acoustic system excited by arbitrary random inputs, it is important to predict the whole probability distribution form, which is closely connected with many noise evaluation indexes such as Lx and Leq. In this paper, a new type of evaluation method is proposed by introducing lower- and higher-order functional models matched to the prediction of the response probability distribution form from a problem-oriented viewpoint. Because of the non-negativity of the sound intensity variable, the response probability density function can reasonably be expressed in advance, theoretically, in a statistical Laguerre expansion series form. The system characteristic between input and output can be described by the regression relationship between the distribution parameters (containing the expansion coefficients of this expression) and the stochastic input, and these regression functions can be expressed in terms of an orthogonal series expansion. Since, in the actual environment, the observed output is inevitably contaminated by background noise, the above regression functions cannot be directly employed as models for the actual environment. Fortunately, the observed output is given by the sum of the system output and the background noise, owing to the additivity of intensity quantities, and the statistical moments of the background noise can be obtained in advance; thus, models relating the regression functions to a function of the observed output can be derived. Next, the parameters of the regression functions are determined based on a least-squares error criterion and a measure of statistical independence, according to the level of the non-Gaussian property of the function of the observed output. Thus, by using the regression functions obtained by the proposed identification method, the probability distribution of the output with the background noise removed can be predicted. Finally, the effectiveness of the proposed method is also confirmed experimentally by applying it to an actual indoor-outdoor acoustic system.

  • Impact of Packet Spacing Time on Packet Loss under Loss Window Size for FEC-Based Applications

    Teruko MIYATA  Harumoto FUKUDA  Satoshi ONO  

     
    PAPER

      Vol:
    E82-D No:4
      Page(s):
    846-853

Certain relationships between successive packet loss patterns and packet spacing are described. One possible way to observe a successive packet loss pattern is to investigate test packets generated at certain intervals (e.g., Poisson intervals or constant regular intervals). However, successive packet loss depends strongly on the generated interval: if test packets are generated at long intervals, the successive loss pattern cannot be observed. Thus, in such a method, where the packet intervals may sometimes be long or short, the successiveness of packet loss should be considered in terms of the packet spacing. To clarify the relationship between successive packet loss and packet spacing, we analyze data from observations of an actual network, with the loss window size as a parameter. We find that when the packet spacing is narrower, i.e., the interval is shorter, the probability that the packet immediately following a single packet loss is also lost becomes higher.
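The kind of measurement this abstract describes can be sketched directly: given a trace of (send time, lost?) pairs, estimate the probability that the packet following a lost packet is also lost, conditioned on the spacing between them. The trace and the spacing threshold below are synthetic.

```python
# Estimate P(packet i+1 lost | packet i lost, spacing <= max_gap)
# from a trace of (send_time, lost) pairs. All data here is synthetic.

def conditional_loss_prob(trace, max_gap):
    """Fraction of losses whose next packet, sent within max_gap, is also lost."""
    follow, lost_both = 0, 0
    for (t0, l0), (t1, l1) in zip(trace, trace[1:]):
        if l0 and (t1 - t0) <= max_gap:
            follow += 1
            lost_both += l1
    return lost_both / follow if follow else 0.0

trace = [(0, True), (1, True), (2, False), (10, True), (30, False)]
print(conditional_loss_prob(trace, max_gap=5))   # narrow spacing only
print(conditional_loss_prob(trace, max_gap=50))  # includes wide spacing
```

On this toy trace the narrow-spacing estimate is higher than the wide-spacing one, mirroring the paper's observation that loss successiveness concentrates at short packet spacings.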

  • Key-Dependency of Linear Probability of RC5

    Shiho MORIAI  Kazumaro AOKI  Kazuo OHTA  

     
    PAPER

      Vol:
    E80-A No:1
      Page(s):
    9-18

In estimating the vulnerability of a block cipher to differential cryptanalysis and linear cryptanalysis, we must consider the fact that the differential probability and the linear probability vary with the key. In cryptosystems where the round key is XORed to the input data of each round, the difference in both types of probability across keys is regarded as negligible. However, this is not the case with RC5. This paper makes a primary analysis of the key-dependency of the linear probability of RC5. Throughout this paper we study the "precise" linear probability. We find some linear approximations that have a higher deviation (bias) for some keys than the "best linear approximation" claimed by Kaliski and Yin at CRYPTO'95. Using one linear approximation, we find 10 weak keys of RC5-4/2/2 with linear probability 2^-1, 2 weak keys of RC5-4/5/16 with linear probability 2^-2, and a weak key of RC5-16/5/16 with linear probability 2^-15.4, while Kaliski and Yin's "best biases" are 2^-3, 2^-9, and 2^-17, respectively.
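The bias underlying a linear probability can be measured by brute force on a small component: count how often the input-mask parity equals the output-mask parity and subtract 1/2. The 4-bit S-box below is a standard textbook example, not a component of RC5 (whose data-dependent rotations are what make its linear probability key-dependent in the first place).

```python
# Empirical bias of a linear approximation in_mask.x = out_mask.S(x)
# over a 4-bit S-box (a common textbook S-box, not from RC5).

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(v):
    """Parity (XOR of all bits) of an integer."""
    return bin(v).count("1") & 1

def bias(in_mask, out_mask):
    """|#{x : in_mask.x == out_mask.S(x)} / 16 - 1/2| for the 4-bit S-box."""
    hits = sum(parity(x & in_mask) == parity(SBOX[x] & out_mask)
               for x in range(16))
    return abs(hits / 16 - 0.5)

print(bias(0b0110, 0b1011))  # -> 0.25 (the approximation holds for 12 of 16 inputs)
```

For RC5 the analogous count depends on the key, which is exactly why the paper finds weak keys with far larger biases than the key-averaged "best" approximation.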

  • Content-Based Video Indexing and Retrieval -- A Natural Language Approach

    Yeun-Bae KIM  Masahiro SHIBATA  

     
    PAPER

      Vol:
    E79-D No:6
      Page(s):
    695-705

This paper describes methods in which natural language is used to describe video contents, knowledge of which is needed for intelligent video manipulation. The content encoded in natural language is extracted by a language analyzer in the form of subject-centered dependency structures, a language-oriented representation, and is combined incrementally into a single structure called a multi-path index tree. Content descriptors and their inter-relations are extracted from the index tree to provide high-speed retrieval and flexibility. The content-based video index is represented in a two-dimensional structure in which the descriptors are mapped onto a component axis and temporal references (i.e., video segments aligned to the descriptors) are mapped onto a time axis. We implemented an experimental image retrieval system to illustrate that the proposed index structure 1) has superior retrieval capabilities compared to those of conventional methods, 2) can be generated by an automated procedure, and 3) has a compact and flexible structure that is easily expandable, making integration with vision processing possible.

  • Continuous Speech Recognition Using a Combination of Syntactic Constraints and Dependency Relationships

    Tsuyoshi MORIMOTO  

     
    PAPER-Speech Processing and Acoustics

      Vol:
    E79-D No:1
      Page(s):
    54-62

This paper proposes a Japanese continuous speech recognition mechanism in which a full-sentence-level context-free grammar (CFG) and one kind of semantic constraint, called "dependency relationships between two bunsetsu (a kind of phrase)" in Japanese, are used during speech recognition in an integrated way. Each dependency relationship is a modification relationship between two bunsetsu; these relationships include the case-frame relationship of a noun bunsetsu to a predicate bunsetsu, and adnominal modification relationships such as that of a noun bunsetsu to a noun bunsetsu. To suppress the processing overhead caused by using relationships of this type during speech recognition, no rigorous semantic analysis is performed; instead, a simple "matching with examples" approach is adopted. An experiment was carried out and the results were compared with a case employing only CFG constraints. They show that the speech recognition accuracy is improved and that the overhead is small enough.

  • Implication Problems for Specialization Constraints on Databases Supporting Complex Objects

    Minoru ITO  Michio NAKANISHI  

     
    PAPER-Algorithms, Data Structures and Computational Complexity

      Vol:
    E77-A No:9
      Page(s):
    1510-1519

For a complex object model, a form of range restriction called a specialization constraint (SC) has been proposed, which is associated not only with the properties themselves but also with property value paths. The domain and range of an SC, however, were limited to single classes. In this paper, SCs are generalized to have sets of classes as their domains and ranges. Let Σ be a set of SCs, where each SC in Σ has a set of classes as its domain and a non-empty set of classes as its range. It is proved that an SC is a logical consequence of Σ if and only if it is a finite logical consequence of Σ. A sound and complete axiomatization for SCs is then presented. Finally, a polynomial-time algorithm is given that decides whether or not an SC is a logical consequence of Σ.

  • A Preferential Constraint Satisfaction Technique for Natural Language Analysis

    Katashi NAGAO  

     
    PAPER

      Vol:
    E77-D No:2
      Page(s):
    161-170

In this paper, we present a new technique for the semantic analysis of sentences, including an ambiguity-packing method that generates a packed representation of individual syntactic and semantic structures. This representation is based on a dependency structure with constraints that must be satisfied in the syntax-semantics mapping phase. Complete syntax-semantics mapping is not performed until all ambiguities have been resolved, thus avoiding the combinatorial explosions that sometimes occur when unpacking locally packed ambiguities. A constraint satisfaction technique makes it possible to resolve ambiguities efficiently without unpacking. Disambiguation is the process of applying syntactic and semantic constraints to the possible candidate solutions (such as modifiees, cases, and word senses) and removing unsatisfactory candidates. Since several candidates often remain after applying constraints, another kind of knowledge is required to select the most plausible candidate solution; we call this knowledge a preference. Constraints and preferences must be applied in coordination for disambiguation: either alone is insufficient, and the interactions between them are important. We also present an algorithm for controlling the interaction between constraints and preferences in the disambiguation process. By allowing the preferences to control the application of the constraints, ambiguities can be resolved efficiently, avoiding combinatorial explosions.
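The constraints-then-preferences control can be sketched as a two-step filter-and-select. The candidate encoding, the constraint, and the preference score below are all invented; in the paper, constraints and preferences also interact during filtering rather than running in strict sequence.

```python
# Minimal constraints-then-preferences disambiguation: hard constraints
# prune candidate solutions, then a preference score selects among the
# survivors. All candidates and scores are toy inventions.

def disambiguate(candidates, constraints, preference):
    """Drop candidates violating any constraint, then take the most preferred survivor."""
    viable = [c for c in candidates if all(ok(c) for ok in constraints)]
    return max(viable, key=preference) if viable else None

# toy modifiee candidates ("category@position") for an ambiguous attachment
candidates = ["verb@2", "noun@5", "verb@7"]
constraints = [lambda c: not c.startswith("noun")]   # e.g. a hard case constraint
preference = lambda c: -int(c.split("@")[1])         # prefer the nearer modifiee

print(disambiguate(candidates, constraints, preference))  # -> "verb@2"
```

Pruning with cheap hard constraints before scoring is what keeps the candidate set, and hence the packed representation, from exploding combinatorially.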
