Prachya BOONKWAN Thepchai SUPNITHI
This paper presents a syntax-based framework for gap resolution in analytic languages. CCG, well known for its treatment of deletion under coordination, is extended with a memory mechanism similar to the slot-and-filler mechanism, resulting in wider coverage of syntactic gap patterns. Though our grammar formalism is more expressive than canonical CCG, its generative power is bounded by that of Partially Linear Indexed Grammar. Despite the spurious ambiguity that originates from the memory mechanism, we show that probabilistic parsing remains feasible by using the dual decomposition algorithm.
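As an aside, the dual decomposition technique named in this abstract is a standard optimization method; the following is a minimal, generic sketch of Lagrangian dual decomposition in which two toy sub-models must agree on a shared set of choices. The sub-models, step-size schedule, and item names are assumptions made purely for illustration and are not the authors' parser.

```python
# Minimal sketch of dual decomposition: two toy sub-models must agree on
# which items to select; Lagrange multipliers enforce agreement iteratively.
# (Illustrative only -- the sub-models here are stand-ins, not CCG parsers.)

def argmax_with_penalty(scores, penalties, sign):
    # Each sub-model selects items whose penalty-adjusted score is positive.
    return {i for i, s in scores.items() if s + sign * penalties[i] > 0}

def dual_decompose(scores_a, scores_b, iterations=50, step=0.5):
    u = {i: 0.0 for i in scores_a}                 # Lagrange multipliers
    za = set()
    for t in range(iterations):
        za = argmax_with_penalty(scores_a, u, +1)  # sub-problem A
        zb = argmax_with_penalty(scores_b, u, -1)  # sub-problem B
        if za == zb:                               # agreement: certified optimum
            return za
        rate = step / (t + 1)
        for i in u:                                # subgradient update on disagreements
            u[i] -= rate * ((i in za) - (i in zb))
    return za                                      # fall back to A's last solution

if __name__ == "__main__":
    a = {"gap1": 1.0, "gap2": -0.5}
    b = {"gap1": 0.8, "gap2": 0.3}
    print(dual_decompose(a, b))
```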
Yoshihide KATO Shigeki MATSUBARA
This paper describes an incremental parser based on an adjoining operation. By using this operation, the parser avoids the problem of infinite local ambiguity. The paper further proposes a restricted version of the adjoining operation, which preserves the lexical dependencies of partial parse trees. Experimental results show that this restriction improves the accuracy of incremental parsing.
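For readers unfamiliar with adjoining, the sketch below illustrates a generic TAG-style adjoining operation on simple bracketed trees: an auxiliary tree is spliced into an internal node of the same label, and the original subtree moves to the auxiliary tree's foot node. The tree encoding and node labels are assumptions chosen for the example, not the incremental parser described in the paper.

```python
# Minimal sketch of an adjoining operation on simple bracketed trees.
# A tree is (label, [children]); a leaf child is just a string, and the
# auxiliary tree marks its foot node with the label followed by "*".

def adjoin(tree, aux, label):
    node_label, children = tree
    if node_label == label:
        # Splice the auxiliary tree in here, placing the original subtree
        # at the auxiliary tree's foot node.
        return substitute_foot(aux, tree, label + "*")
    new_children = [adjoin(c, aux, label) if isinstance(c, tuple) else c
                    for c in children]
    return (node_label, new_children)

def substitute_foot(node, subtree, foot):
    node_label, children = node
    new_children = []
    for c in children:
        if c == foot:                              # foot node marker, e.g. "VP*"
            new_children.append(subtree)
        elif isinstance(c, tuple):
            new_children.append(substitute_foot(c, subtree, foot))
        else:
            new_children.append(c)
    return (node_label, new_children)

if __name__ == "__main__":
    partial = ("S", [("NP", ["John"]), ("VP", [("V", ["runs"])])])
    aux = ("VP", [("Adv", ["often"]), "VP*"])      # auxiliary tree rooted in VP
    print(adjoin(partial, aux, "VP"))
```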
So-Young PARK Yong-Jae KWAK Joon-Ho LIM Hae-Chang RIM
In this paper, we propose a probabilistic feature-based parsing model for head-final languages, which improves syntactic disambiguation while reducing the parsing cost associated with lexical information. For effective syntactic disambiguation, the proposed parsing model utilizes several useful features: a syntactic label feature, a content feature, a functional feature, and a size feature. Moreover, it is designed to represent the word order variation of non-head words in head-final languages. Experimental results show that the proposed parsing model outperforms previous lexicalized parsing models while depending far less on lexical information.
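The general flavor of such a feature-based model can be illustrated with the toy scorer below, which multiplies independent conditional probabilities for label, content, functional, and size features. The probability tables, conditioning choices, and smoothing constant are invented for illustration and do not reflect the paper's model or estimates.

```python
# Minimal sketch of scoring a constituent with independent feature
# probabilities (label, content, functional, size).  The probability
# tables are invented stand-ins; a real model estimates them from a treebank.

from math import prod

P_LABEL   = {("NP", "S"): 0.4, ("VP", "S"): 0.5}  # P(label | parent)
P_CONTENT = {("noun", "NP"): 0.6}                 # P(content word class | label)
P_FUNC    = {("nominative", "NP"): 0.5}           # P(functional word | label)
P_SIZE    = {(2, "NP"): 0.3}                      # P(constituent size | label)

def score(parent, label, content, functional, size):
    factors = [
        P_LABEL.get((label, parent), 1e-6),       # unseen events get a small floor
        P_CONTENT.get((content, label), 1e-6),
        P_FUNC.get((functional, label), 1e-6),
        P_SIZE.get((size, label), 1e-6),
    ]
    return prod(factors)

if __name__ == "__main__":
    print(score("S", "NP", "noun", "nominative", 2))
```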
Yong-Jae KWAK So-Young PARK Joon-Ho LIM Hae-Chang RIM
In this paper, we propose a naïve probabilistic shift-reduce parsing model that uses contextual information more flexibly than previous probabilistic GLR parsing models and exploits the characteristics of agglutinative languages, in which functional words are highly developed. Experimental results on Korean show that our model with the proposed contextual information improves parsing accuracy more effectively than previous models. Moreover, it is compact in model size and robust even with a small training set.
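A minimal sketch of the shift-reduce control loop is given below for illustration. The greedy action scorer is a hand-written stand-in that conditions only on stack size and remaining input; the paper's model instead conditions on richer contextual information, and the sample tokens are invented.

```python
# Minimal sketch of a greedy probabilistic shift-reduce parser.  Each step
# scores SHIFT against REDUCE and applies the better action.  The toy
# scorer conditions only on stack size and remaining input; a real model
# would use richer contextual features (stack labels, functional words,
# upcoming tokens).

def action_score(action, stack, queue):
    if action == "REDUCE":
        return 0.9 if len(stack) >= 2 and not queue else 0.1
    return 0.9 if queue else 0.0                   # SHIFT needs remaining input

def parse(tokens):
    stack, queue = [], list(tokens)
    while queue or len(stack) > 1:
        s = action_score("SHIFT", stack, queue)
        r = action_score("REDUCE", stack, queue)
        if s >= r and queue:
            stack.append((queue.pop(0), []))       # SHIFT: push next word
        elif len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            stack.append(("X", [left, right]))     # REDUCE: combine top two
        else:
            break                                  # no applicable action
    return stack[0] if stack else None

if __name__ == "__main__":
    print(parse(["na-neun", "hakgyo-e", "ganda"]))
```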