2023
Hard Sample Aware Prompt-Tuning
Yuanjian Xu | Qi An | Jiahuan Zhang | Peng Li | Zaiqing Nie
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Prompt-tuning-based few-shot learning has garnered increasing attention in recent years due to its efficiency and promising capability. To achieve the best performance on NLP tasks with just a few samples, it is vital to include as many informative samples as possible and to avoid misleading ones. However, no prior work in the prompt-tuning literature addresses the problem of differentiating informative hard samples from misleading ones during model training, which is challenging due to the lack of supervision signals about sample quality. We propose a Hard Sample Aware Prompt-Tuning framework (HardPT) that solves the non-differentiable problem of hard-sample identification with reinforcement learning, and strengthens the discriminative power of the feature space, without changing the original data distribution, via an adaptive contrastive learning method. An extensive empirical study on a series of NLP tasks demonstrates the capability of HardPT in few-shot scenarios. HardPT obtains new SOTA results on all evaluated NLP tasks, including pushing SST-5 accuracy to 49.5% (1.1-point absolute improvement), QNLI accuracy to 74.6% (1.9-point absolute improvement), MNLI accuracy to 71.5% (0.7-point absolute improvement), TACREV F1-score to 28.2 (1.0-point absolute improvement), and i2b2/VA F1-score to 41.2 (1.3-point absolute improvement).
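The abstract frames hard-sample identification as a non-differentiable selection decision solved with reinforcement learning. A standard way to make such a decision trainable is a REINFORCE-style policy gradient; the sketch below illustrates that general trick, with every name, the policy architecture, and the reward all illustrative assumptions rather than HardPT's actual design.

```python
# Hypothetical REINFORCE-style sketch of learning a non-differentiable
# keep/drop decision over training samples; not HardPT's actual method.
import torch
import torch.nn as nn

class SamplePolicy(nn.Module):
    """Scores each candidate sample's probability of being an informative hard sample."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.scorer(feats)).squeeze(-1)  # keep-probability per sample

policy = SamplePolicy(feat_dim=768)
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)

feats = torch.randn(32, 768)              # e.g., encoder embeddings of candidate samples
probs = policy(feats)
dist = torch.distributions.Bernoulli(probs)
mask = dist.sample()                       # non-differentiable keep/drop decision

reward = torch.rand(())                    # stand-in for dev-set gain after training on kept samples
loss = -(reward * dist.log_prob(mask).sum())  # REINFORCE: reinforce decisions that raised the reward
optim.zero_grad(); loss.backward(); optim.step()
```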
2020
Pre-trained Language Model Based Active Learning for Sentence Matching
Guirong Bai | Shizhu He | Kang Liu | Jun Zhao | Zaiqing Nie
Proceedings of the 28th International Conference on Computational Linguistics
Active learning can significantly reduce the annotation cost of data-driven techniques. However, previous active learning approaches for natural language processing mainly rely on an entropy-based uncertainty criterion and ignore the characteristics of natural language. In this paper, we propose a pre-trained-language-model-based active learning approach for sentence matching. Unlike previous active learning methods, it derives linguistic criteria from the pre-trained language model to measure instances and helps select more effective instances for annotation. Experiments demonstrate that our approach achieves greater accuracy with fewer labeled training instances.
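The entropy-based uncertainty criterion this paper contrasts against is easy to make concrete: score each unlabeled instance by the entropy of the classifier's predicted label distribution and send the most uncertain ones for annotation. A minimal sketch, with all names and tensor shapes assumed:

```python
# Minimal sketch of the entropy-based uncertainty baseline for active learning.
import torch

def entropy_select(logits: torch.Tensor, k: int) -> torch.Tensor:
    """logits: (num_unlabeled, num_classes) scores from the current classifier.
    Returns indices of the k most uncertain instances to annotate next."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    return entropy.topk(k).indices

logits = torch.randn(1000, 2)             # e.g., matched / not-matched scores per sentence pair
to_annotate = entropy_select(logits, k=50)
```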
2019
Incorporating Interlocutor-Aware Context into Response Generation on Multi-Party Chatbots
Cao Liu | Kang Liu | Shizhu He | Zaiqing Nie | Jun Zhao
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Conventional chatbots focus on two-party response generation, which oversimplifies real dialogue scenes. In this paper, we strive toward a novel task of Response Generation on Multi-Party Chatbots (RGMPC), where the generated responses heavily rely on the interlocutors’ roles (e.g., speaker and addressee) and their utterances. Unfortunately, complex interactions among the interlocutors’ roles make it challenging to precisely capture conversational contexts and interlocutor information. Facing this challenge, we present a response generation model that incorporates Interlocutor-aware Contexts into Recurrent Encoder-Decoder frameworks (ICRED) for RGMPC. Specifically, we employ interactive representations to capture dialogue contexts for different interlocutors. Moreover, we leverage an addressee memory to enhance contextual interlocutor information for the target addressee. Finally, we construct a corpus for RGMPC based on an existing open-access dataset. Automatic and manual evaluations demonstrate that ICRED substantially outperforms strong baselines.
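One way to picture the addressee memory described above is as a per-interlocutor slot that is fused with the dialogue context before decoding, so generation is conditioned on who is being addressed. The sketch below is an assumption about that mechanism, not the paper's exact architecture; all shapes and names are illustrative.

```python
# Illustrative addressee-memory lookup: fuse the dialogue context with the
# target addressee's memory slot before decoding. Not ICRED's exact design.
import torch
import torch.nn as nn

class AddresseeMemory(nn.Module):
    def __init__(self, num_speakers: int, hidden: int):
        super().__init__()
        self.memory = nn.Embedding(num_speakers, hidden)  # one slot per interlocutor
        self.mix = nn.Linear(2 * hidden, hidden)

    def forward(self, context: torch.Tensor, addressee_id: torch.Tensor) -> torch.Tensor:
        """Condition the context summary on the target addressee's slot."""
        slot = self.memory(addressee_id)                  # (batch, hidden)
        return torch.tanh(self.mix(torch.cat([context, slot], dim=-1)))

mem = AddresseeMemory(num_speakers=10, hidden=256)
ctx = torch.randn(4, 256)                                 # encoder summaries of the dialogue
fused = mem(ctx, torch.tensor([3, 1, 0, 7]))              # feed this to the decoder
```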
Generating Questions for Knowledge Bases via Incorporating Diversified Contexts and Answer-Aware Loss
Cao Liu | Kang Liu | Shizhu He | Zaiqing Nie | Jun Zhao
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
We tackle the task of question generation over knowledge bases. Conventional methods for this task neglect two crucial research issues: 1) the given predicate needs to be expressed; 2) the answer to the generated question needs to be definitive. In this paper, we strive toward the above two issues via incorporating diversified contexts and answer-aware loss. Specifically, we propose a neural encoder-decoder model with multi-level copy mechanisms to generate such questions. Furthermore, the answer aware loss is introduced to make generated questions corresponding to more definitive answers. Experiments demonstrate that our model achieves state-of-the-art performance. Meanwhile, such generated question is able to express the given predicate and correspond to a definitive answer.
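One plausible reading of an answer-aware loss is a weighted sum of the usual token-level generation loss and an auxiliary term that ties the generated question back to the gold answer. The sketch below shows that combination under assumed shapes; the auxiliary head and weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an answer-aware training objective: generation loss plus an
# auxiliary answer-prediction term. The weighting alpha is an assumption.
import torch
import torch.nn.functional as F

def answer_aware_loss(gen_logits, gold_question, answer_logits, gold_answer, alpha=0.5):
    """gen_logits: (batch, seq_len, vocab); gold_question: (batch, seq_len) token ids;
    answer_logits: (batch, num_answers); gold_answer: (batch,) answer ids."""
    gen_loss = F.cross_entropy(gen_logits.transpose(1, 2), gold_question)  # token-level CE
    ans_loss = F.cross_entropy(answer_logits, gold_answer)                 # definitiveness signal
    return gen_loss + alpha * ans_loss

loss = answer_aware_loss(torch.randn(2, 12, 30000), torch.randint(0, 30000, (2, 12)),
                         torch.randn(2, 5), torch.randint(0, 5, (2,)))
```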
2016
Segment-Level Sequence Modeling using Gated Recursive Semi-Markov Conditional Random Fields
Jingwei Zhuo | Yong Cao | Jun Zhu | Bo Zhang | Zaiqing Nie
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
2015
Joint Entity Recognition and Disambiguation
Gang Luo | Xiaojiang Huang | Chin-Yew Lin | Zaiqing Nie
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
2010
Mining Name Translations from Entity Graph Mapping
Gae-won You | Seung-won Hwang | Young-In Song | Long Jiang | Zaiqing Nie
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
2009
Anchor Text Extraction for Academic Search
Shuming Shi | Fei Xing | Mingjie Zhu | Zaiqing Nie | Ji-Rong Wen
Proceedings of the 2009 Workshop on Text and Citation Analysis for Scholarly Digital Libraries (NLPIR4DL)