Libin Shen


2022

Lite Unified Modeling for Discriminative Reading Comprehension
Yilin Zhao | Hai Zhao | Libin Shen | Yinggong Zhao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Discriminative machine reading comprehension (MRC) is a broad and major category of MRC whose general goal is to predict answers from the given materials. However, the focuses of individual discriminative MRC tasks differ considerably: multi-choice MRC requires a model to highlight and integrate all potentially critical evidence globally, while extractive MRC demands high local precision at answer boundaries. Previous work lacks a unified design tailored to discriminative MRC tasks as a whole. To fill this gap, we propose a lightweight POS-Enhanced Iterative Co-Attention Network (POI-Net) as a first attempt at such task-aware unified modeling, handling diverse discriminative MRC tasks simultaneously. While introducing almost no additional parameters, our lite unified design brings the model significant improvements in both the encoder and decoder components. Evaluation results on four discriminative MRC benchmarks consistently indicate the general effectiveness and applicability of our model, and the code is available at https://github.com/Yilin1111/poi-net.
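For illustration only, the following is a minimal numpy sketch of what an iterative co-attention block with additive POS enhancement might look like. The function names, the additive fusion of POS embeddings, the residual update, and the dimensions are all assumptions made for this sketch, not POI-Net's exact architecture.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_step(passage, question):
    """One co-attention pass: each side attends over the other.

    passage:  (Lp, d) token representations
    question: (Lq, d) token representations
    """
    scores = passage @ question.T                  # (Lp, Lq) similarity matrix
    p2q = softmax(scores, axis=1) @ question       # passage tokens summarize the question
    q2p = softmax(scores.T, axis=1) @ passage      # question tokens summarize the passage
    # Fuse the attended context back into each side (simple residual fusion here).
    return passage + p2q, question + q2p

def iterative_co_attention(passage_emb, question_emb, pos_p, pos_q, n_iters=2):
    # "POS-enhanced": inject POS-tag embeddings into the token representations
    # (shown here as additive fusion, which is an assumption of this sketch).
    p, q = passage_emb + pos_p, question_emb + pos_q
    for _ in range(n_iters):                       # iterate the co-attention block
        p, q = co_attention_step(p, q)
    return p, q

# Toy usage with random vectors standing in for encoder outputs and POS embeddings.
rng = np.random.default_rng(0)
p, q = iterative_co_attention(rng.normal(size=(30, 64)), rng.normal(size=(8, 64)),
                              rng.normal(size=(30, 64)), rng.normal(size=(8, 64)))
print(p.shape, q.shape)   # (30, 64) (8, 64)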

2021

What If Sentence-hood is Hard to Define: A Case Study in Chinese Reading Comprehension
Jiawei Wang | Hai Zhao | Yinggong Zhao | Libin Shen
Findings of the Association for Computational Linguistics: EMNLP 2021

Machine reading comprehension (MRC) is a challenging NLP task, as it requires carefully handling all linguistic granularities from word and sentence to passage. For extractive MRC, the answer span has been shown to be largely determined by a key evidence unit, which in most cases is a sentence. However, we recently observed that sentences are not clearly defined in many languages, to varying extents. This causes a location unit ambiguity problem: it becomes difficult for a model to determine which sentence contains the answer span when the sentence itself is not clearly delimited. Taking Chinese as a case study, we explain and analyze this linguistic phenomenon and propose a reader with Explicit Span-Sentence Predication to alleviate the problem. Our proposed reader achieves a new state-of-the-art on a Chinese MRC benchmark and shows great potential for other languages.

2020

Cold-Start and Interpretability: Turning Regular Expressions into Trainable Recurrent Neural Networks
Chengyue Jiang | Yinggong Zhao | Shanbo Chu | Libin Shen | Kewei Tu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Neural networks can achieve impressive performance on many natural language processing applications, but they typically need large amounts of labeled data for training and are not easily interpretable. Symbolic rules such as regular expressions, on the other hand, are interpretable, require no training, and often achieve decent accuracy; but rules cannot benefit from labeled data when it is available and hence underperform neural networks in rich-resource scenarios. In this paper, we propose a type of recurrent neural network called the FA-RNN that combines the advantages of neural networks and regular expression rules. An FA-RNN can be converted from regular expressions and deployed in zero-shot and cold-start scenarios. It can also utilize labeled data for training to achieve improved prediction accuracy. After training, an FA-RNN often remains interpretable and can be converted back into regular expressions. We apply FA-RNNs to text classification and observe that they significantly outperform previous neural approaches in both zero-shot and low-resource settings and remain very competitive in rich-resource settings.
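To make the regex-to-RNN idea concrete, here is a minimal numpy sketch, not the paper's construction: a finite automaton's run over a token sequence can be written as a recurrence of matrix multiplications over a state vector, which is what allows an automaton converted from a regular expression to be treated as a recurrent network and trained. The toy pattern, word-level vocabulary, and matrix layout below are assumptions made for illustration.

import numpy as np

# State update: h_{t+1} = h_t @ T[x_t], where h is a distribution over automaton
# states and T[x] is the transition matrix for input symbol x. Making the T
# matrices trainable parameters turns the automaton into a recurrent network
# initialized from the rule.

# Hand-built automaton for the word-level pattern ".* good .*" over a tiny vocabulary.
vocab = ["good", "bad", "movie"]
n_states = 2                        # state 0: "good" not yet seen, state 1: accepting
T = np.zeros((len(vocab), n_states, n_states))
T[:] = np.eye(n_states)             # by default, every word keeps the current state
T[0, 0, 0], T[0, 0, 1] = 0.0, 1.0   # reading "good" moves state 0 -> state 1

def run(tokens):
    h = np.array([1.0, 0.0])        # start in state 0
    for w in tokens:
        h = h @ T[vocab.index(w)]   # one "RNN step" per token
    return h[1]                     # mass on the accepting state

print(run(["bad", "movie"]))        # 0.0 -> pattern not matched
print(run(["good", "movie"]))       # 1.0 -> pattern matched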

Learning Numeral Embedding
Chengyue Jiang | Zhonglin Nian | Kaihao Guo | Shanbo Chu | Yinggong Zhao | Libin Shen | Kewei Tu
Findings of the Association for Computational Linguistics: EMNLP 2020

Word embeddings are an essential building block of deep learning methods for natural language processing. Although word embeddings have been extensively studied over the years, how to effectively embed numerals, a special subset of words, remains underexplored. Existing word embedding methods do not learn numeral embeddings well because there are infinitely many numerals and each individual numeral appears only rarely in training corpora. In this paper, we propose two novel numeral embedding methods that handle the out-of-vocabulary (OOV) problem for numerals. We first induce a finite set of prototype numerals using either a self-organizing map or a Gaussian mixture model. We then represent the embedding of a numeral as a weighted average of the prototype numeral embeddings. Numeral embeddings represented in this manner can be plugged into existing word embedding learning approaches, such as skip-gram, for training. We evaluated our methods on four intrinsic and extrinsic tasks (word similarity, embedding numeracy, numeral prediction, and sequence labeling) and showed their effectiveness.
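A minimal sketch of the prototype idea follows. It assumes a Gaussian kernel over log-scaled numeric distance as the weighting scheme; the paper instead derives prototypes and weights from a self-organizing map or a Gaussian mixture model, so the function name, kernel, and dimensions here are illustrative only.

import numpy as np

def numeral_embedding(value, prototypes, prototype_embs, sigma=1.0):
    """Embed an arbitrary numeral as a weighted average of prototype embeddings.

    Weights come from a Gaussian kernel over the log-scaled distance between the
    numeral and each prototype (the kernel choice is an assumption of this sketch).
    """
    d = np.log1p(np.abs(value)) - np.log1p(np.abs(prototypes))   # log-scale distance
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    w /= w.sum()                                                  # normalize weights
    return w @ prototype_embs                                     # weighted average

# Toy usage: 5 prototype numerals with random 8-dimensional embeddings.
rng = np.random.default_rng(0)
prototypes = np.array([0.0, 1.0, 10.0, 100.0, 1000.0])
prototype_embs = rng.normal(size=(5, 8))
vec = numeral_embedding(37.5, prototypes, prototype_embs)
print(vec.shape)   # (8,) -- defined even for numerals never seen in training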

2013

What is Hidden among Translation Rules
Libin Shen | Bowen Zhou
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

Two-Neighbor Orientation Model with Cross-Boundary Global Contexts
Hendra Setiawan | Bowen Zhou | Bing Xiang | Libin Shen
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2010

String-to-Dependency Statistical Machine Translation
Libin Shen | Jinxi Xu | Ralph Weischedel
Computational Linguistics, Volume 36, Issue 4 - December 2010

Statistical Machine Translation with a Factorized Grammar
Libin Shen | Bing Zhang | Spyros Matsoukas | Jinxi Xu | Ralph Weischedel
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

2009

Effective Use of Linguistic and Contextual Information for Statistical Machine Translation
Libin Shen | Jinxi Xu | Bing Zhang | Spyros Matsoukas | Ralph Weischedel
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing

2008

LTAG Dependency Parsing with Bidirectional Incremental Construction
Libin Shen | Aravind Joshi
Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing

A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model
Libin Shen | Jinxi Xu | Ralph Weischedel
Proceedings of ACL-08: HLT

2007

Guided Learning for Bidirectional Sequence Classification
Libin Shen | Giorgio Satta | Aravind Joshi
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics

2006

Issues in Synchronizing the English Treebank and PropBank
Olga Babko-Malaya | Ann Bies | Ann Taylor | Szuting Yi | Martha Palmer | Mitch Marcus | Seth Kulick | Libin Shen
Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006

2005

Incremental LTAG Parsing
Libin Shen | Aravind Joshi
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

A Smorgasbord of Features for Statistical Machine Translation
Franz Josef Och | Daniel Gildea | Sanjeev Khudanpur | Anoop Sarkar | Kenji Yamada | Alex Fraser | Shankar Kumar | Libin Shen | David Smith | Katherine Eng | Viren Jain | Zhen Jin | Dragomir Radev
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

Discriminative Reranking for Machine Translation
Libin Shen | Anoop Sarkar | Franz Josef Och
Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004

Nondeterministic LTAG Derivation Tree Extraction
Libin Shen
Proceedings of the 7th International Workshop on Tree Adjoining Grammar and Related Formalisms

2003

An SVM-based voting algorithm with application to parse reranking
Libin Shen | Aravind K. Joshi
Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003

Using LTAG Based Features in Parse Reranking
Libin Shen | Anoop Sarkar | Aravind Joshi
Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing

Chinese Word Segmentation as LMR Tagging
Nianwen Xue | Libin Shen
Proceedings of the Second SIGHAN Workshop on Chinese Language Processing

A SNoW Based Supertagger with Application to NP Chunking
Libin Shen | Aravind K. Joshi
Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics