Yi Li


Large Margin Neural Language Model
Jiaji Huang | Yi Li | Wei Ping | Liang Huang
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and we further propose a large margin formulation. The proposed method aims to enlarge the margin between "good" and "bad" sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method gains up to a 1.1-point WER reduction for speech recognition and a 1.0-point BLEU increase for machine translation.
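The margin idea in this abstract can be illustrated with a generic pairwise hinge loss: penalize the model whenever a "bad" hypothesis scores within a fixed margin of the "good" one. This is a minimal sketch of that standard formulation, not the paper's exact objective; the function name, scores, and margin value are illustrative assumptions.

```python
def margin_loss(good_score, bad_score, margin=1.0):
    """Illustrative pairwise large-margin (hinge) loss.

    Returns 0 when the 'good' hypothesis outscores the 'bad' one
    by at least `margin`; otherwise returns the shortfall, which a
    trainer would minimize. Scores here are hypothetical model
    log-scores for two candidate sentences.
    """
    return max(0.0, margin - (good_score - bad_score))

# Margin already satisfied: no penalty.
print(margin_loss(2.0, 0.5))  # -> 0.0
# 'Bad' sentence scores too close to the 'good' one: positive loss.
print(margin_loss(1.0, 0.8))  # -> 0.8
```

In a re-scoring setting, "good" and "bad" pairs would come from task-specific supervision (e.g., low-WER vs. high-WER hypotheses), and the loss would be summed over pairs during end-to-end training.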


Is This Post Persuasive? Ranking Argumentative Comments in Online Forum
Zhongyu Wei | Yang Liu | Yi Li
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

A Preliminary Study of Disputation Behavior in Online Debating Forum
Zhongyu Wei | Yandi Xia | Chen Li | Yang Liu | Zachary Stallbohm | Yi Li | Yang Jin
Proceedings of the Third Workshop on Argument Mining (ArgMining2016)


Deploying MT into a Localisation Workflow: Pains and Gains
Yanli Sun | Juan Liu | Yi Li
Proceedings of Machine Translation Summit XIII: Papers


Exploring Abbreviation Expansion for Genomic Information Retrieval
Nicola Stokes | Yi Li | Lawrence Cavedon | Justin Zobel
Proceedings of the Australasian Language Technology Workshop 2007