Hiroaki AKUTSU Ko ARAI
Lanxi LIU Pengpeng YANG Suwen DU Sani M. ABDULLAHI
Xiaoguang TU Zhi HE Gui FU Jianhua LIU Mian ZHONG Chao ZHOU Xia LEI Juhang YIN Yi HUANG Yu WANG
Yingying LU Cheng LU Yuan ZONG Feng ZHOU Chuangao TANG
Jialong LI Takuto YAMAUCHI Takanori HIRANO Jinyu CAI Kenji TEI
Wei LEI Yue ZHANG Hanfeng XIE Zebin CHEN Zengping CHEN Weixing LI
David CLARINO Naoya ASADA Atsushi MATSUO Shigeru YAMASHITA
Takashi YOKOTA Kanemitsu OOTSU
Xiaokang Jin Benben Huang Hao Sheng Yao Wu
Tomoki MIYAMOTO
Ken WATANABE Katsuhide FUJITA
Masashi UNOKI Kai LI Anuwat CHAIWONGYEN Quoc-Huy NGUYEN Khalid ZAMAN
Takaharu TSUBOYAMA Ryota TAKAHASHI Motoi IWATA Koichi KISE
Chi ZHANG Li TAO Toshihiko YAMASAKI
Ann Jelyn TIEMPO Yong-Jin JEONG
Haruhisa KATO Yoshitaka KIDANI Kei KAWAMURA
Jiakun LI Jiajian LI Yanjun SHI Hui LIAN Haifan WU
Gyuyeong KIM
Hyun KWON Jun LEE
Fan LI Enze YANG Chao LI Shuoyan LIU Haodong WANG
Guangjin Ouyang Yong Guo Yu Lu Fang He
Yuyao LIU Qingyong LI Shi BAO Wen WANG
Cong PANG Ye NI Jia Ming CHENG Lin ZHOU Li ZHAO
Nikolay FEDOROV Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Yukasa MURAKAMI Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Kazuya KAKIZAKI Kazuto FUKUCHI Jun SAKUMA
Yitong WANG Htoo Htoo Sandi KYAW Kunihiro FUJIYOSHI Keiichi KANEKO
Waqas NAWAZ Muhammad UZAIR Kifayat ULLAH KHAN Iram FATIMA
Haeyoung Lee
Ji XI Pengxu JIANG Yue XIE Wei JIANG Hao DING
Weiwei JING Zhonghua LI
Sena LEE Chaeyoung KIM Hoorin PARK
Akira ITO Yoshiaki TAKAHASHI
Rindo NAKANISHI Yoshiaki TAKATA Hiroyuki SEKI
Chuzo IWAMOTO Ryo TAKAISHI
Chih-Ping Wang Duen-Ren Liu
Yuya TAKADA Rikuto MOCHIDA Miya NAKAJIMA Syun-suke KADOYA Daisuke SANO Tsuyoshi KATO
Yi Huo Yun Ge
Rikuto MOCHIDA Miya NAKAJIMA Haruki ONO Takahiro ANDO Tsuyoshi KATO
Koichi FUJII Tomomi MATSUI
Yaotong SONG Zhipeng LIU Zhiming ZHANG Jun TANG Zhenyu LEI Shangce GAO
Souhei TAKAGI Takuya KOJIMA Hideharu AMANO Morihiro KUGA Masahiro IIDA
Jun ZHOU Masaaki KONDO
Tetsuya MANABE Wataru UNUMA
Kazuyuki AMANO
Takumi SHIOTA Tonan KAMATA Ryuhei UEHARA
Hitoshi MURAKAMI Yutaro YAMAGUCHI
Jingjing Liu Chuanyang Liu Yiquan Wu Zuo Sun
Zhenglong YANG Weihao DENG Guozhong WANG Tao FAN Yixi LUO
Yoshiaki TAKATA Akira ONISHI Ryoma SENDA Hiroyuki SEKI
Dinesh DAULTANI Masayuki TANAKA Masatoshi OKUTOMI Kazuki ENDO
Kento KIMURA Tomohiro HARAMIISHI Kazuyuki AMANO Shin-ichi NAKANO
Ryotaro MITSUBOSHI Kohei HATANO Eiji TAKIMOTO
Genta INOUE Daiki OKONOGI Satoru JIMBO Thiem Van CHU Masato MOTOMURA Kazushi KAWAMURA
Hikaru USAMI Yusuke KAMEDA
Yinan YANG
Takumi INABA Takatsugu ONO Koji INOUE Satoshi KAWAKAMI
Fengshan ZHAO Qin LIU Takeshi IKENAGA
Naohito MATSUMOTO Kazuhiro KURITA Masashi KIYOMI
Tomohiro KOBAYASHI Tomomi MATSUI
Shin-ichi NAKANO
Ming PAN
In this paper, we present a new technique for the semantic analysis of sentences, including an ambiguity-packing method that generates a packed representation of individual syntactic and semantic structures. This representation is based on a dependency structure with constraints that must be satisfied in the syntax-semantics mapping phase. Complete syntax-semantics mapping is not performed until all ambiguities have been resolved, thus avoiding the combinatorial explosions that sometimes occur when unpacking locally packed ambiguities. A constraint satisfaction technique makes it possible to resolve ambiguities efficiently without unpacking. Disambiguation is the process of applying syntactic and semantic constraints to the possible candidate solutions (such as modifiees, cases, and word senses) and removing unsatisfactory candidates. Since several candidates often remain after applying constraints, another kind of knowledge is required to enable selection of the most plausible candidate solution. We call this new knowledge a preference. Constraints and preferences must be applied in coordination for disambiguation: either of them alone is insufficient for the purpose, and the interactions between them are important. We also present an algorithm for controlling the interaction between the constraints and the preferences in the disambiguation process. By allowing the preferences to control the application of the constraints, ambiguities can be resolved efficiently, avoiding combinatorial explosions.
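The constraint-then-preference scheme described above can be sketched as a small filter-and-rank step. This is an illustrative toy, not the paper's packed-representation algorithm: the candidate attachments, constraint predicates, and preference scores are all hypothetical.

```python
# Hypothetical sketch: prune packed candidates with hard constraints, then
# rank the survivors with a preference score. Data and names are
# illustrative, not from the paper.

def disambiguate(candidates, constraints, preference):
    """Apply hard constraints first, then pick the most preferred survivor."""
    survivors = [c for c in candidates if all(ok(c) for ok in constraints)]
    if not survivors:          # over-constrained: fall back to all candidates
        survivors = candidates
    return max(survivors, key=preference)

# Candidate attachments for an ambiguous modifier ("with a telescope").
candidates = [
    {"modifiee": "saw", "case": "instrument"},
    {"modifiee": "man", "case": "attribute"},
]
constraints = [lambda c: c["case"] in ("instrument", "attribute")]
preference = lambda c: 0.8 if c["modifiee"] == "saw" else 0.3

best = disambiguate(candidates, constraints, preference)
print(best["modifiee"])   # the preferred attachment: saw
```

The point of the ordering is the one the abstract makes: constraints do cheap, sound pruning first, so the preference ranking runs over a small survivor set instead of the full unpacked space.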
This paper introduces a constraint logic programming (CLP) language, cu-Prolog, as an implementation framework for constraint-based natural language processing. Compared to other CLP languages, cu-Prolog has several unique features. Most CLP languages take algebraic equations or inequations as constraints; cu-Prolog, on the other hand, takes Prolog atomic formulas in terms of user-defined predicates. cu-Prolog can thus describe the symbolic and combinatorial constraints occurring in constraint-based grammar formalisms. As a constraint solver, cu-Prolog uses the unfold/fold transformation, a well-known program transformation technique, applied dynamically with some heuristics. To handle the partial information described by feature structures, cu-Prolog uses the PST (Partially Specified Term) as its data structure. Sections 1 and 2 give an introduction to the constraint-based grammar formalisms on which this paper is based; the outline of cu-Prolog is explained in Sect. 3, with implementation issues described in Sect. 4. Section 5 illustrates its linguistic application to disjunctive feature structures (DFSs) and to parsing constraint-based grammar formalisms such as Japanese Phrase Structure Grammar (JPSG). In both applications, a disambiguation process is realized by transforming constraints, which gives a picture of constraint-based NLP.
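One unfold step of the kind mentioned above can be illustrated on propositional Horn clauses. This is a drastic simplification: real cu-Prolog unfolds atomic-formula constraints with unification over partially specified terms and applies heuristics; the toy below ignores variables entirely and every name in it is hypothetical.

```python
# Illustrative sketch of one "unfold" step on propositional Horn clauses
# (no unification, no heuristics -- a toy, not cu-Prolog's solver).

def unfold(goal_body, atom, program):
    """Replace `atom` in a goal body by the body of each defining clause,
    yielding one new goal body per matching clause."""
    rest = [a for a in goal_body if a != atom]
    return [sorted(set(rest + body)) for head, body in program if head == atom]

# Toy program: p holds if q and r; p also holds if s.
program = [("p", ["q", "r"]), ("p", ["s"])]
goals = unfold(["p", "t"], "p", program)
print(goals)   # [['q', 'r', 't'], ['s', 't']]
```

Repeating such steps (together with folding) is how a constraint set gets transformed into a simpler, more directly solvable form.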
Kyoko KAI Yuko DEN Yasuharu DEN Mika OBA Jun-ichi NAKAMURA Sho YOSHIDA
Naturalness of expressions reflects various pragmatic factors in addition to grammatical factors. In this paper, we discuss relations between expressions and two pragmatic factors: the point of view of the speaker and a hierarchical relation among the participants. Degree of
Toru SUGIMOTO Akinori YONEZAWA
As a general basis for constructing a cooperative and flexible dialogue system, we are interested in modelling the inference process of an agent who participates in a dialogue. For this purpose, it is natural and powerful to model it within the agent's general cognitive framework for problem solving. This paper presents such a framework. In this framework, we represent an agent's mental states in a form called the Mental World Structure, which consists of multiple mental worlds. Each mental world is a set of mental propositions and corresponds to one modal context, that is, a specific point of view. Modalities in an agent's mental states are represented by path expressions, which are first-class citizens of the system and can be composed with each other to make up composite modalities. With the Mental World Structure, we can handle modalities more flexibly than with ordinary modal logics, situation theory, and other representation systems. We smoothly incorporate into the structure three basic inference procedures: deduction, abduction, and truth maintenance. Precise definitions of the structure and the inference procedures are given. Furthermore, we work through several cooperative dialogues as examples in our framework.
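Two of the inference procedures named above can be rendered in miniature: forward-chaining deduction over Horn rules, and abduction as finding a hypothesis whose addition makes a goal deducible. The propositional encoding below (no mental worlds, no path expressions) is a hypothetical simplification, not the paper's system.

```python
# Toy deduction and abduction over propositional Horn rules.
# A rule is (body, head): if all body atoms hold, head holds.

def deduce(facts, rules):
    """Forward-chain to the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def abduce(goal, facts, rules, hypotheses):
    """Return the hypotheses whose addition makes `goal` deducible."""
    return [h for h in hypotheses if goal in deduce(set(facts) | {h}, rules)]

rules = [(["rain"], "wet"), (["sprinkler"], "wet")]
print(sorted(deduce({"rain"}, rules)))               # ['rain', 'wet']
print(abduce("wet", set(), rules, ["rain", "snow"]))  # ['rain']
```

Truth maintenance, the third procedure, would additionally track which derived propositions depend on which assumptions, so that retracting an assumption retracts its consequences.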
In this paper, we present a unified model for dialogue understanding involving various sorts of ambiguities, such as lexical, syntactic, semantic, and plan ambiguities. This model can estimate and revise the most preferable interpretation of utterances as a dialogue progresses, and its features successfully capture the dynamic nature of dialogue management. The model consists of two main portions: (1) an extension of first-order logic for maintaining multiple interpretations of ambiguous utterances in a dialogue; and (2) a device that estimates and revises the most preferable interpretation from among these multiple interpretations. Since the model is logic-based, it provides a good basis for formulating a rational justification of its current interpretation, which is one of the most desirable properties for generating helpful responses. These features make the model extremely useful for interactive dialogue management.
Hozumi TANAKA K.G. SURESH Koichi YAMADA
A family of new generalized LR parsing algorithms is proposed that makes use of the ancestors tables introduced by Kipps. Since Kipps's algorithm does not provide a method for extracting parsing results, it is considered not a practical parser but a recognizer. In this paper, we propose two methods for extracting all parse trees from the ancestors tables in the top vertices of a graph-structured stack. For an input sentence of length n, while the time complexity of the Tomita parser can exceed O(n³) for some context-free grammars (CFGs), the time complexity of our parser is O(n³) for any CFG, since our algorithm is based on Kipps's recognizer. Extracting a parse tree from the ancestors tables takes time of order n². Some preliminary experimental results are given to show the efficiency of our parsers over the Tomita parser.
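The O(n³) bound discussed above can be made concrete with the classical cubic-time baseline it matches: a plain CKY recognizer for a grammar in Chomsky normal form. This is not Kipps's ancestors-table algorithm; the grammar and sentence are invented for illustration.

```python
# Plain CKY recognition: three nested loops over spans, start positions,
# and split points give the O(n^3) behavior for any CNF grammar.

def cky_recognize(words, lexical, binary, start="S"):
    n = len(words)
    # chart[i][j]: set of nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {a for a, b in lexical if b == w}
    for span in range(2, n + 1):                 # O(n) span lengths
        for i in range(n - span + 1):            # O(n) start positions
            j = i + span
            for k in range(i + 1, j):            # O(n) split points
                for a, (b, c) in binary:
                    if b in chart[i][k] and c in chart[k][j]:
                        chart[i][j].add(a)
    return start in chart[0][n]

lexical = [("NP", "she"), ("V", "eats"), ("NP", "fish")]
binary = [("S", ("NP", "VP")), ("VP", ("V", "NP"))]
print(cky_recognize(["she", "eats", "fish"], lexical, binary))   # True
```

The generalized LR approach reaches the same worst-case bound while keeping the left-to-right, table-driven behavior of LR parsing; the extra work in the paper is recovering the parse trees, in O(n²) per tree, rather than just the accept/reject answer.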
A case structure expression is one of the most important forms for representing the meaning of a sentence. Case structure analysis is usually performed by consulting case frame information in a verb dictionary. However, this analysis is very difficult because of several problems, such as word sense ambiguity and structural ambiguity. A conventional method for solving these problems is selectional restriction, but this method has a drawback in the semantic marker (SM) method: the trade-off between descriptive power and construction cost. In this paper, we propose a method of case structure analysis based on examples in a case frame dictionary. This method uses a case frame dictionary that has some typical example sentences for each case frame, and it selects a proper case frame for an input sentence by matching the input sentence against the examples in the case frame dictionary. The best matching score, which is utilized for selecting a proper case frame for a predicate, can be considered the score for the case structure of that predicate. Therefore, when there are two or more readings of a sentence because of structural ambiguity, the best reading can be selected by evaluating the sum of the scores for the case structures of all predicates in the sentence. We report on experiments which show that this method is superior to the conventional, coarse-grained SM method, and also describe the superiority of the example-based method over the SM method.
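The selection step described above can be sketched as scoring each candidate frame by its best-matching stored example. The paper's similarity presumably uses richer lexical matching; plain agreement on case fillers stands in for it here, and all frame names and fillers are hypothetical.

```python
# Toy example-based case-frame selection: score a frame by the best
# agreement between the input's case fillers and any stored example.

def frame_score(input_cases, example):
    """Count case slots filled identically in input and example."""
    shared = set(input_cases) & set(example)
    return sum(1 for c in shared if input_cases[c] == example[c])

def select_frame(input_cases, frames):
    """Return (frame_name, best_score) over all example sentences."""
    return max(
        ((name, max(frame_score(input_cases, ex) for ex in examples))
         for name, examples in frames.items()),
        key=lambda t: t[1],
    )

frames = {
    "eat-1": [{"agent": "person", "object": "food"}],   # literal sense
    "eat-2": [{"agent": "acid",   "object": "metal"}],  # corrode sense
}
best, score = select_frame({"agent": "person", "object": "food"}, frames)
print(best, score)   # eat-1 2
```

The same score then does double duty, as the abstract notes: summed over all predicates, it ranks the competing structural readings of the whole sentence.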
This paper presents a new method for resolving lexical (word sense) ambiguities inherent in natural language sentences. The Sentence Analyzer (SENA) was developed to resolve such ambiguities by using constraints and example-based preferences. The ambiguities are packed into a single dependency structure, and grammatical and lexical constraints are applied to it in order to reduce the degree of ambiguity. The application of constraints is realized by a very effective constraint-satisfaction technique. Remaining ambiguities are resolved by the use of preferences calculated from an example-base, which is a set of fully parsed word-to-word dependencies acquired semi-automatically from on-line dictionaries.
Hideo WATANABE Hiroshi MARUYAMA
This paper proposes a new type of transfer system, called Similarity-driven Transfer System (or SimTran), which uses an example-based approach to the transfer phase of MT. In this paper, we describe a method for calculating similarity, a method for searching the most appropriate set of translation rules, and a method for constructing an output structure from those selected rules. Further, we show that SimTran can use not only translation examples but also syntax-based translation rules used in conventional transfer systems at the same time.
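The example-based transfer step can be caricatured as nearest-example rule selection. SimTran computes similarity over dependency structures (and can mix in hand-written rules); the flat word-pair overlap below is a hypothetical stand-in, and the English-Japanese examples are invented.

```python
# Toy similarity-driven rule selection: pick the translation example whose
# source side best matches the input dependency pairs.

def similarity(input_deps, example_deps):
    """Overlap between two sets of (head, dependent) pairs."""
    return len(set(input_deps) & set(example_deps))

def select_rule(input_deps, examples):
    return max(examples, key=lambda ex: similarity(input_deps, ex["source"]))

examples = [
    {"source": [("play", "violin")], "target": [("hiku", "violin")]},
    {"source": [("play", "tennis")], "target": [("suru", "tennis")]},
]
rule = select_rule([("play", "violin")], examples)
print(rule["target"])   # [('hiku', 'violin')]
```

The design point is that "play" has no single Japanese verb: the nearest example, not a hand-written disambiguation rule, decides between candidates such as "hiku" and "suru".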
Kenji KITA Tsuyoshi MORIMOTO Kazumi OHKURA Shigeki SAGAYAMA Yaneo YANO
This paper describes Japanese spoken sentence recognition using hybrid language modeling, which combines the advantages of both syntactic and stochastic language models. As the baseline system, we adopted the HMM-LR speech recognition system, with which we have already achieved good performance on Japanese phrase recognition tasks. Several improvements have been made to this system, aimed at handling continuously spoken sentences. The first improvement is HMM training with continuous utterances as well as word utterances. In previous implementations, HMMs were trained with only word utterances; continuous utterances are included in the HMM training data because coarticulation effects are much stronger in continuous utterances. The second improvement is the development of a sentential grammar for Japanese, created by combining inter-phrase and intra-phrase CFGs, which were developed separately. The third improvement is the incorporation of stochastic linguistic knowledge, which includes a stochastic CFG and a bigram model of production rules. The system was evaluated using continuously spoken sentences from a conference registration task of approximately 750 words. We attained a sentence accuracy of 83.9% in the speaker-dependent condition.
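The stochastic half of the hybrid model, a bigram model over production rules, can be sketched as follows. The rule sequences, counts, and add-alpha smoothing are illustrative assumptions, not the paper's training setup.

```python
# Minimal bigram model over production-rule sequences, usable to rerank
# recognizer hypotheses: more frequent rule sequences score higher.

import math
from collections import Counter

def train_bigram(sequences):
    uni, bi = Counter(), Counter()
    for seq in sequences:
        seq = ["<s>"] + seq
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
    return uni, bi

def logprob(seq, uni, bi, vocab_size, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of one hypothesis."""
    seq = ["<s>"] + seq
    return sum(
        math.log((bi[(p, w)] + alpha) / (uni[p] + alpha * vocab_size))
        for p, w in zip(seq, seq[1:])
    )

uni, bi = train_bigram([["NP", "VP"], ["NP", "VP"], ["VP", "NP"]])
good = logprob(["NP", "VP"], uni, bi, vocab_size=3)
bad = logprob(["VP", "VP"], uni, bi, vocab_size=3)
print(good > bad)   # the attested rule sequence scores higher
```

In the hybrid scheme, the CFG decides which rule sequences are possible at all, while scores like these decide which of the surviving hypotheses is most plausible.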