Jian-Yun Nie

Also published as: Jian-yun Nie


Improving Few-Shot Relation Classification by Prototypical Representation Learning with Definition Text
Li Zhenzhen | Yuyang Zhang | Jian-Yun Nie | Dongsheng Li
Findings of the Association for Computational Linguistics: NAACL 2022

Few-shot relation classification is difficult because the few available instances may not represent the relation patterns well. Some existing approaches have explored extra information, such as relation definitions, in addition to the instances to learn better relation representations. However, this extra information has been encoded independently of the labeled instances. In this paper, we propose to learn a prototype encoder from relation definitions in a way that is useful for relation instance classification. To this end, we use a joint training approach to train both a prototype encoder from definitions and an instance encoder. Extensive experiments on several datasets demonstrate the effectiveness and usefulness of our prototype encoder learned from definition text, enabling us to outperform state-of-the-art approaches.
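The classification step the abstract builds on can be illustrated with a minimal prototypical-network sketch. This is not the paper's implementation (which learns the prototype from definition text via a jointly trained encoder); it only shows the generic step of averaging support embeddings into a prototype and assigning a query to the nearest one. All names and vectors here are illustrative.

```python
import math

def prototype(support_embeddings):
    # Class prototype = mean of the support instance embeddings.
    dim = len(support_embeddings[0])
    n = len(support_embeddings)
    return [sum(e[i] for e in support_embeddings) / n for i in range(dim)]

def classify(query, prototypes):
    # Assign the query to the label of the nearest prototype
    # (Euclidean distance, as in standard prototypical networks).
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda label: dist(query, prototypes[label]))

# Toy 2-way, 2-shot episode with hand-made 2-d "embeddings".
protos = {
    "rel_a": prototype([[1.0, 0.0], [0.8, 0.2]]),
    "rel_b": prototype([[0.0, 1.0], [0.2, 0.8]]),
}
label = classify([1.0, 0.0], protos)  # -> "rel_a"
```

In the paper's setting, the prototype would instead come from encoding the relation's definition text, trained jointly with the instance encoder rather than averaged from instances alone.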

UPER: Boosting Multi-Document Summarization with an Unsupervised Prompt-based Extractor
Shangqing Tu | Jifan Yu | Fangwei Zhu | Juanzi Li | Lei Hou | Jian-Yun Nie
Proceedings of the 29th International Conference on Computational Linguistics

Multi-Document Summarization (MDS) commonly employs the two-stage extract-then-abstract paradigm, which first extracts a relatively short meta-document and then feeds it into a deep neural network to generate an abstract. Previous work usually takes the ROUGE score as the label for training a scoring model to evaluate source documents. However, the trained scoring model is prone to under-fitting in low-resource settings, as it relies on the training data. To extract documents effectively, we construct prompting templates that invoke the underlying knowledge in a pre-trained language model (PLM) to calculate the perplexity of documents and keywords, which can assess a document's semantic salience. Our unsupervised approach can be applied as a plug-in to boost other metrics for evaluating a document's salience, thus improving the subsequent abstract generation. We obtain positive results on 2 MDS datasets, 2 data settings, and 2 abstractive backbone models, showing our method's effectiveness. Our code is available at https://github.com/THU-KEG/UPER
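The perplexity signal the abstract relies on can be sketched with a toy unigram language model standing in for the PLM. This is only an illustration of what "calculating a document's perplexity" means, not the paper's prompting setup; the model, floor probability, and tokens below are made up.

```python
import math

def perplexity(tokens, unigram_lm, floor=1e-6):
    # Perplexity of a token sequence under a unigram language model:
    # exp of the negative mean log-probability. Unseen tokens get a
    # small floor probability so the log is defined.
    log_prob = sum(math.log(unigram_lm.get(t, floor)) for t in tokens)
    return math.exp(-log_prob / len(tokens))

# Tokens the model assigns high probability yield low perplexity;
# out-of-vocabulary tokens yield very high perplexity.
lm = {"neural": 0.4, "summarization": 0.4, "the": 0.2}
in_vocab = perplexity(["neural", "summarization"], lm)   # -> 2.5
out_of_vocab = perplexity(["zyx", "qqq"], lm)            # very large
```

In UPER's setting the probabilities come from a PLM conditioned on a prompting template, so a low perplexity indicates that the document fits the prompt-invoked knowledge, which is used as a salience signal.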

Learning to Transfer Prompts for Text Generation
Junyi Li | Tianyi Tang | Jian-Yun Nie | Ji-Rong Wen | Xin Zhao
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Pretrained language models (PLMs) have made remarkable progress in text generation tasks via fine-tuning. However, it is challenging to fine-tune PLMs in data-scarce situations. Therefore, it is non-trivial to develop a general and lightweight model that can adapt to various text generation tasks based on PLMs. To fulfill this purpose, recent prompt-based learning offers a potential solution. In this paper, we improve this technique and propose a novel prompt-based method (PTG) for text generation in a transferable setting. First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. To consider both task- and instance-level information, we design an adaptive attention mechanism to derive the target prompts. For each data instance, PTG learns a specific target prompt by attending to highly relevant source prompts. In extensive experiments, PTG yields competitive or better results than fine-tuning methods. We release our source prompts as an open resource, where users can add or reuse them to improve new text generation tasks for future research. Code and data are available at https://github.com/RUCAIBox/Transfer-Prompts-for-Text-Generation.


An Investigation of Suitability of Pre-Trained Language Models for Dialogue Generation – Avoiding Discrepancies
Yan Zeng | Jian-Yun Nie
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Learning Syntactic Dense Embedding with Correlation Graph for Automatic Readability Assessment
Xinying Qiu | Yuan Chen | Hanwu Chen | Jian-Yun Nie | Yuming Shen | Dawei Lu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Deep learning models for automatic readability assessment generally discard the linguistic features traditionally used in machine learning models for the task. We propose to incorporate linguistic features into neural network models by learning syntactic dense embeddings based on these features. To cope with the relationships between the features, we form a correlation graph among features and use it to learn their embeddings, so that similar features are represented by similar embeddings. Experiments with six data sets of two proficiency levels demonstrate that our proposed methodology can complement a BERT-only model to achieve significantly better performance for automatic readability assessment.
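The correlation-graph construction can be sketched in a few lines. This is a rough illustration, not the paper's pipeline: the feature matrix, the Pearson measure, and the 0.5 threshold below are assumptions, and the paper goes on to learn embeddings over the resulting graph, which this sketch does not do.

```python
import math

def pearson(x, y):
    # Pearson correlation between two feature columns.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def correlation_graph(feature_matrix, threshold=0.5):
    # Rows are documents, columns are linguistic features. Connect two
    # features with an edge when their |correlation| exceeds the threshold.
    cols = list(zip(*feature_matrix))
    edges = set()
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            if abs(pearson(list(cols[i]), list(cols[j]))) >= threshold:
                edges.add((i, j))
    return edges

# Toy matrix: feature 1 is perfectly correlated with feature 0,
# feature 2 is only weakly correlated with both.
matrix = [[1, 2, 5], [2, 4, 1], [3, 6, 4], [4, 8, 2]]
edges = correlation_graph(matrix)  # -> {(0, 1)}
```

Embeddings would then be learned so that features joined by an edge (here, features 0 and 1) end up close in the embedding space.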

Inductive Topic Variational Graph Auto-Encoder for Text Classification
Qianqian Xie | Jimin Huang | Pan Du | Min Peng | Jian-Yun Nie
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Graph convolutional networks (GCNs) have recently been applied to text classification and achieved excellent performance. However, existing GCN-based methods do not assume an explicit latent semantic structure of documents, making the learned representations less effective and difficult to interpret. They are also transductive in nature and thus cannot handle out-of-graph documents. To address these issues, we propose a novel model named inductive Topic Variational Graph Auto-Encoder (T-VGAE), which incorporates a topic model into a variational graph auto-encoder (VGAE) to capture the hidden semantic information between documents and words. T-VGAE inherits the interpretability of the topic model and the efficient information propagation mechanism of VGAE. It learns probabilistic representations of words and documents by jointly encoding and reconstructing the global word-level graph and bipartite document graphs, where each document is considered individually and decoupled from the global correlation graph so as to enable inductive learning. Our experiments on several benchmark datasets show that our method outperforms existing competitive models on supervised and semi-supervised text classification, as well as unsupervised text representation learning. In addition, it has higher interpretability and is able to deal with unseen documents.

A Simple and Efficient Multi-Task Learning Approach for Conditioned Dialogue Generation
Yan Zeng | Jian-Yun Nie
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Conditioned dialogue generation suffers from the scarcity of labeled responses. In this work, we exploit labeled non-dialogue text data related to the condition, which is much easier to collect. We propose a multi-task learning approach to leverage both labeled dialogue and text data. Three tasks jointly optimize the same pre-trained Transformer: conditioned dialogue generation on the labeled dialogue data, and conditioned language encoding and conditioned language generation on the labeled text data. Experimental results show that our approach outperforms state-of-the-art models by leveraging the labeled texts, and that it also obtains a larger performance improvement from the text data than previous methods.


ScriptWriter: Narrative-Guided Script Generation
Yutao Zhu | Ruihua Song | Zhicheng Dou | Jian-Yun Nie | Jin Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

It is appealing to have a system that automatically generates a story or script from a storyline, even though this is still out of reach. In dialogue systems, it would also be useful to drive dialogues by a dialogue plan. In this paper, we address a key problem involved in these applications: guiding a dialogue by a narrative. The proposed model, ScriptWriter, selects the best response among the candidates that fit the context as well as the given narrative. It keeps track of what in the narrative has been said and what is to be said. A narrative plays a different role from the context (i.e., previous utterances), which is generally used in current dialogue systems. Due to the unavailability of data for this new application, we construct a new large-scale data collection, GraphMovie, from a movie website where end-users can upload their narratives freely when watching a movie. Experimental results on the dataset show that our proposed narrative-based approach significantly outperforms baselines that simply use the narrative as a kind of context.


Mutux at SemEval-2018 Task 1: Exploring Impacts of Context Information On Emotion Detection
Pan Du | Jian-Yun Nie
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes MuTuX, our system designed for task 1-5a, emotion classification analysis of tweets, at SemEval-2018. The system aims at exploring the potential of terms' context information for emotion analysis. A recurrent neural network is adopted to capture the context information of terms in tweets. Only term features and their sequential relations are used in our system. Our submission ranked 16th out of 35 systems on the task of emotion detection in English-language tweets.


TJUdeM: A Combination Classifier for Aspect Category Detection and Sentiment Polarity Classification
Zhifei Zhang | Jian-Yun Nie | Hongling Wang
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)

A Neural Network Approach to Context-Sensitive Generation of Conversational Responses
Alessandro Sordoni | Michel Galley | Michael Auli | Chris Brockett | Yangfeng Ji | Margaret Mitchell | Jian-Yun Nie | Jianfeng Gao | Bill Dolan
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies


Bridging the Gap between Intrinsic and Perceived Relevance in Snippet Generation
Jing He | Pablo Duboue | Jian-Yun Nie
Proceedings of COLING 2012


Summarize What You Are Interested In: An Optimization Framework for Interactive Personalized Summarization
Rui Yan | Jian-Yun Nie | Xiaoming Li
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing


Towards an optimal weighting of context words based on distance
Bernard Brosseau-Villeneuve | Jian-Yun Nie | Noriko Kando
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

RALI: Automatic Weighting of Text Window Distances
Bernard Brosseau-Villeneuve | Noriko Kando | Jian-Yun Nie
Proceedings of the 5th International Workshop on Semantic Evaluation

Positional Language Models for Clinical Information Retrieval
Florian Boudin | Jian-Yun Nie | Martin Dawes
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Clinical Information Retrieval using Document and PICO Structure
Florian Boudin | Jian-Yun Nie | Martin Dawes
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics


Search Engine Adaptation by Feedback Control Adjustment for Time-sensitive Query
Ruiqiang Zhang | Yi Chang | Zhaohui Zheng | Donald Metzler | Jian-yun Nie
Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers


Selecting Query Term Alternations for Web Search by Exploiting Query Contexts
Guihong Cao | Stephen Robertson | Jian-Yun Nie
Proceedings of ACL-08: HLT

A Comparative Study for Query Translation using Linear Combination and Confidence Measure
Youssef Kadri | Jian-Yun Nie
Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I


A system to mine large-scale bilingual dictionaries from monolingual web pages
Guihong Cao | Jianfeng Gao | Jian-Yun Nie
Proceedings of Machine Translation Summit XI: Papers


Effective Stemming for Arabic Information Retrieval
Youssef Kadri | Jian-Yun Nie
Proceedings of the International Conference on the Challenge of Arabic for NLP/MT

Arabic has a very rich and complex morphology, and appropriate morphological processing is very important for Information Retrieval (IR). In this paper, we propose a new stemming technique that determines the stem of a word, i.e., its semantic core according to Arabic morphology. This method is compared to a commonly used light stemming technique that truncates a word by simple rules. Our tests on TREC collections show that the new stemming technique is more effective than light stemming.

An Iterative Implicit Feedback Approach to Personalized Search
Yuanhua Lv | Le Sun | Junlin Zhang | Jian-Yun Nie | Wan Chen | Wei Zhang
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

An Information-Theoretic Approach to Automatic Evaluation of Summaries
Chin-Yew Lin | Guihong Cao | Jianfeng Gao | Jian-Yun Nie
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Context-Dependent Term Relations for Information Retrieval
Jing Bai | Jian-Yun Nie | Guihong Cao
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing


Mots composés dans les modèles de langue pour la recherche d’information
Carmen Alvarez | Philippe Langlais | Jian-Yun Nie
Actes de la 11ème conférence sur le Traitement Automatique des Langues Naturelles. Posters

A classical approach in information retrieval (IR) consists of building a representation of documents and queries based on the single words that constitute them. The use of bigram models has been studied, but the constraints on word order and adjacency imposed in these works are not always justified for information retrieval. We propose a new approach based on language models that incorporate lexical affinities (LAs), i.e., unordered pairs of words that occur close to each other in a text. We describe this model and compare it to the more traditional unigram and bigram models as well as to the vector-space model.


Embedding Web-Based Statistical Translation Models in Cross-Language Information Retrieval
Wessel Kraaij | Jian-Yun Nie | Michel Simard
Computational Linguistics, Volume 29, Number 3, September 2003: Special Issue on the Web as Corpus


Automatic construction of parallel English-Chinese corpus for cross-language information retrieval
Jiang Chen | Jian-Yun Nie
Sixth Applied Natural Language Processing Conference


Using a Probabilistic Translation Model for Cross-Language Information Retrieval
Jian-Yun Nie | Pierre Isabelle | George Foster
Sixth Workshop on Very Large Corpora


A Unifying Approach To Segmentation Of Chinese And Its Application To Text Retrieval
Jian-Yun Nie | Xiaobo Ren | Martin Brisebois
Proceedings of Rocling VIII Computational Linguistics Conference VIII