Using short dependency relations from auto-parsed data for Chinese dependency parsing

Wenliang Chen, Daisuke Kawahara, Kiyotaka Uchimoto, Yujie Zhang, Hitoshi Isahara

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


Dependency parsing has attracted a surge of interest lately owing to applications such as machine translation and question answering. Several supervised learning methods can currently train high-performance dependency parsers when sufficient labeled data are available. However, existing statistical dependency parsers perform poorly on words separated by long distances. To address this problem, this article presents an effective dependency parsing approach that incorporates short dependency information from unlabeled data. The unlabeled data are automatically parsed by a deterministic dependency parser, which exhibits relatively high accuracy on short dependencies between words. We then train another parser that uses information on short dependency relations extracted from the output of the first parser. The proposed approach achieves an unlabeled attachment score of 86.52%, an absolute 1.24% improvement over the baseline system on the Chinese Treebank data set. The results indicate that the proposed approach improves parsing performance for words separated by longer distances.
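The core idea in the abstract — counting word pairs linked by short dependencies in auto-parsed data and feeding them, as bucketed-frequency features, to a second parser — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy corpus, the distance threshold of 2, the bucket thresholds, and the feature-string format are all assumptions made for the example.

```python
from collections import Counter

# Hypothetical auto-parsed corpus: each sentence is a list of (word, head_index)
# pairs, with head_index = -1 marking the root. Data here is illustrative only.
auto_parsed = [
    [("the", 1), ("cat", 2), ("sat", -1), ("down", 2)],
    [("the", 1), ("dog", 2), ("sat", -1), ("quietly", 2)],
]

def collect_short_pairs(corpus, max_dist=2):
    """Count (child, head) word pairs whose linear distance is at most max_dist,
    i.e. the short dependencies on which the first-stage parser is reliable."""
    counts = Counter()
    for sent in corpus:
        for i, (word, head) in enumerate(sent):
            if head >= 0 and abs(i - head) <= max_dist:
                counts[(word, sent[head][0])] += 1
    return counts

def bucket(count):
    """Map a raw pair count to a coarse frequency bucket; the thresholds
    here are arbitrary placeholders for whatever binning scheme is used."""
    if count >= 2:
        return "HIGH"
    if count == 1:
        return "LOW"
    return "NONE"

pairs = collect_short_pairs(auto_parsed)

# Feature for a candidate arc (child="cat", head="sat") in the second-stage
# parser: the bucketed frequency of that pair in the auto-parsed data.
feat = "short-dep-bucket=" + bucket(pairs.get(("cat", "sat"), 0))
```

In this sketch the second-stage parser never trusts the first parser's trees directly; it only sees how often a candidate word pair occurred as a short dependency, which keeps the noisy auto-parsed data at arm's length.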

Original language: English
Article number: 10
Journal: ACM Transactions on Asian Language Information Processing
Issue number: 3
Publication status: Published - Aug 1 2009
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • General Computer Science

