
Author Search Result

[Author] Maoxi LI (3 hits)

  • A Global Deep Reranking Model for Semantic Role Classification

    Haitong YANG  Guangyou ZHOU  Tingting HE  Maoxi LI  

     
    LETTER-Natural Language Processing

    Publicized: 2021/04/15  Vol: E104-D No:7  Page(s): 1063-1066

    Current approaches to semantic role classification usually first define a representation vector for a candidate role and feed the vector into a deep neural network to perform classification. The representation vector contains lexical features such as word embeddings and lemma embeddings. From a linguistic perspective, the semantic role frame of a sentence is a joint structure with strong dependencies between arguments, which current deep SRL systems do not take into account. This paper therefore proposes a global deep reranking model to exploit these strong dependencies. Evaluation experiments on the CoNLL 2009 shared task show that our system significantly outperforms a strong local system that does not consider role dependency relations.
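    As a rough illustration of the reranking idea (not the authors' code; the module names, role inventory size, and dimensions below are assumptions), a global reranker can embed an entire candidate role-label sequence together with the argument representations and assign it one score, so the labels are judged jointly rather than one argument at a time:

    ```python
    # Minimal sketch of a global reranker over candidate role-label sequences
    # produced by a local classifier. All sizes are illustrative.
    import torch
    import torch.nn as nn

    class GlobalReranker(nn.Module):
        def __init__(self, num_roles=54, role_dim=32, arg_dim=100, hidden=128):
            super().__init__()
            self.role_emb = nn.Embedding(num_roles, role_dim)
            # BiLSTM over (argument representation, candidate role) pairs, so the
            # score depends on the whole joint role structure of the predicate.
            self.encoder = nn.LSTM(arg_dim + role_dim, hidden,
                                   batch_first=True, bidirectional=True)
            self.scorer = nn.Linear(2 * hidden, 1)

        def forward(self, arg_reprs, role_seq):
            # arg_reprs: (batch, num_args, arg_dim) argument features from the local model
            # role_seq:  (batch, num_args) candidate role labels for those arguments
            x = torch.cat([arg_reprs, self.role_emb(role_seq)], dim=-1)
            h, _ = self.encoder(x)
            return self.scorer(h.mean(dim=1)).squeeze(-1)  # one score per sequence

    # Rerank the k-best label sequences of the local classifier for one predicate.
    model = GlobalReranker()
    arg_reprs = torch.randn(5, 4, 100)          # 5 candidate sequences, 4 arguments
    candidates = torch.randint(0, 54, (5, 4))   # the 5 candidate role-label sequences
    best_sequence = candidates[model(arg_reprs, candidates).argmax()]
    ```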

  • Adversarial Domain Adaptation Network for Semantic Role Classification

    Haitong YANG  Guangyou ZHOU  Tingting HE  Maoxi LI  

     
    PAPER-Natural Language Processing

    Publicized: 2019/09/02  Vol: E102-D No:12  Page(s): 2587-2594

    In this paper, we study domain adaptation for semantic role classification. Most systems use supervised methods for semantic role classification, but these methods often suffer severe performance drops on out-of-domain test data. The reason for these drops is that there are large feature differences between the source and target domains. This paper proposes a framework called Adversarial Domain Adaptation Network (ADAN) to address domain adaptation for semantic role classification. The idea behind our method is that the proposed framework can derive domain-invariant features via adversarial learning and narrow the gap between the source and target feature spaces. To evaluate our method, we conduct experiments on the English portion of the CoNLL 2009 shared task. Experimental results show that our method can largely reduce the performance drop on out-of-domain test data.
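    A common way to realize this kind of adversarial feature learning is a gradient reversal layer: a domain classifier learns to tell source from target features, while the reversed gradient pushes the shared feature extractor toward domain-invariant representations. The sketch below only illustrates that general scheme under assumed layer sizes; it is not the ADAN implementation from the paper:

    ```python
    # Minimal sketch of adversarial domain adaptation with gradient reversal.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Flip (and scale) the gradient flowing back into the feature extractor.
            return -ctx.lam * grad_output, None

    feature_extractor = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
    role_classifier = nn.Linear(128, 54)     # semantic roles (labeled source data only)
    domain_classifier = nn.Linear(128, 2)    # source vs. target domain

    def losses(src_x, src_y, tgt_x, lam=0.1):
        ce = nn.CrossEntropyLoss()
        src_f, tgt_f = feature_extractor(src_x), feature_extractor(tgt_x)
        role_loss = ce(role_classifier(src_f), src_y)
        feats = torch.cat([src_f, tgt_f])
        domains = torch.cat([torch.zeros(len(src_f)), torch.ones(len(tgt_f))]).long()
        # The domain classifier minimizes this loss; through the reversed gradient
        # the feature extractor maximizes it, making the domains hard to separate.
        adv_loss = ce(domain_classifier(GradReverse.apply(feats, lam)), domains)
        return role_loss + adv_loss

    src_x, src_y = torch.randn(16, 300), torch.randint(0, 54, (16,))
    tgt_x = torch.randn(16, 300)
    losses(src_x, src_y, tgt_x).backward()
    ```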

  • A Unified Neural Network for Quality Estimation of Machine Translation

    Maoxi LI  Qingyu XIANG  Zhiming CHEN  Mingwen WANG  

     
    LETTER-Natural Language Processing

    Publicized: 2018/06/18  Vol: E101-D No:9  Page(s): 2417-2421

    The state-of-the-art neural quality estimation (QE) model for machine translation consists of two sub-networks that are tuned separately: a bidirectional recurrent neural network (RNN) encoder-decoder trained for neural machine translation, called the predictor, and an RNN trained for the sentence-level QE task, called the estimator. We propose to combine the two sub-networks into a single neural network, called the unified neural network. During training, the bidirectional RNN encoder-decoder is initialized and pre-trained on a bilingual parallel corpus, and then the networks are trained jointly to minimize the mean absolute error over the QE training samples. Compared with the predictor-estimator approach, the unified neural network helps learn network parameters that are better suited to the QE task. Experimental results on the benchmark data set of the WMT17 sentence-level QE shared task show that the proposed unified neural network approach consistently outperforms the predictor-estimator approach and significantly outperforms the other baseline QE approaches.
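    The joint training objective can be pictured as follows. This is a heavily simplified sketch with assumed vocabulary and layer sizes and a crude stand-in for the encoder-decoder predictor, not the released system: a pre-trained predictor encodes the source, an estimator RNN reads the MT hypothesis conditioned on that encoding, and both parts are updated together under an L1 (mean absolute error) loss against the gold quality score:

    ```python
    # Minimal sketch of jointly training a predictor-style encoder and an
    # estimator RNN with a mean-absolute-error objective.
    import torch
    import torch.nn as nn

    class UnifiedQE(nn.Module):
        def __init__(self, vocab=32000, emb=256, hid=256):
            super().__init__()
            self.src_emb = nn.Embedding(vocab, emb)
            self.mt_emb = nn.Embedding(vocab, emb)
            # "Predictor": bidirectional encoder over the source (would be
            # pre-trained on bilingual parallel data before joint training).
            self.predictor = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
            # "Estimator": RNN over the MT hypothesis, conditioned on the source.
            self.estimator = nn.GRU(emb + 2 * hid, hid, batch_first=True)
            self.out = nn.Linear(hid, 1)

        def forward(self, src, mt):
            _, src_h = self.predictor(self.src_emb(src))       # (2, batch, hid)
            ctx = torch.cat([src_h[0], src_h[1]], dim=-1)      # (batch, 2*hid)
            mt_in = torch.cat([self.mt_emb(mt),
                               ctx.unsqueeze(1).expand(-1, mt.size(1), -1)], dim=-1)
            est, _ = self.estimator(mt_in)
            return torch.sigmoid(self.out(est[:, -1])).squeeze(-1)  # score in [0, 1]

    model = UnifiedQE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    src = torch.randint(0, 32000, (8, 20))     # source sentences
    mt = torch.randint(0, 32000, (8, 22))      # MT hypotheses
    gold = torch.rand(8)                       # gold sentence-level quality scores
    loss = nn.L1Loss()(model(src, mt), gold)   # mean absolute error, minimized jointly
    loss.backward()
    opt.step()
    ```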