Yuanchao Liu


2022

pdf bib
ITNLP2022 at SemEval-2022 Task 8: Pre-trained Model with Data Augmentation and Voting for Multilingual News Similarity
Zhongan Chen | Weiwei Chen | YunLong Sun | Hongqing Xu | Shuzhe Zhou | Bohan Chen | Chengjie Sun | Yuanchao Liu
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

This article introduces our system for SemEval-2022 Task 8: Multilingual News Article Similarity. The task focuses on the consistency of events reported in two news articles. The system consists of a pre-trained model (e.g., InfoXLM or XLM-RoBERTa) that extracts multilingual news features, followed by fully-connected networks that measure the similarity. In addition, data augmentation and ten-fold voting are used to enhance the model. Our final submitted model is an ensemble of three base models, with a Pearson correlation of 0.784 on the test dataset.
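The pipeline described in the abstract (a multilingual pre-trained encoder whose features feed fully-connected layers that regress a similarity score) can be illustrated with a minimal sketch. The checkpoint name, pooling choice, and layer sizes below are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class NewsSimilarityModel(nn.Module):
    """Sketch: multilingual encoder + fully-connected head regressing a similarity score.
    Layer sizes and [CLS]/<s> pooling are assumptions, not the paper's exact setup."""
    def __init__(self, encoder_name="xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, input_ids, attention_mask):
        # Encode the concatenated article pair and pool the first-token representation.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]
        return self.head(cls).squeeze(-1)  # predicted similarity score

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = NewsSimilarityModel()
batch = tokenizer(["News article A ..."], ["News article B ..."],
                  truncation=True, padding=True, return_tensors="pt")
score = model(batch["input_ids"], batch["attention_mask"])
```

The ten-fold voting mentioned in the abstract would, on this reading, average the predictions of models trained on different folds; only the base model is sketched here.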

2020

pdf bib
Enhancing Extractive Text Summarization with Topic-Aware Graph Neural Networks
Peng Cui | Le Hu | Yuanchao Liu
Proceedings of the 28th International Conference on Computational Linguistics

Text summarization aims to compress a textual document into a short summary while keeping salient information. Extractive approaches are widely used in text summarization because of their fluency and efficiency. However, most existing extractive models hardly capture inter-sentence relationships, particularly in long documents. They also often ignore the effect of topical information on capturing important content. To address these issues, this paper proposes a graph neural network (GNN)-based extractive summarization model that captures inter-sentence relationships efficiently via a graph-structured document representation. Moreover, our model integrates a joint neural topic model (NTM) to discover latent topics, which provide document-level features for sentence selection. The experimental results demonstrate that our model not only achieves state-of-the-art results on the CNN/DM and NYT datasets but also considerably outperforms existing approaches on scientific paper datasets consisting of much longer documents, indicating its robustness across document genres and lengths. Further analysis shows that topical information helps the model preselect salient content from the entire document, which explains its effectiveness on long document summarization.
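The abstract's core idea (sentence representations exchanging information over a document graph, with topic features informing sentence selection) can be sketched minimally as below. The single propagation step, the fusion by concatenation, and all dimensions are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TopicAwareSentenceScorer(nn.Module):
    """Sketch of GNN-style extractive scoring: sentence vectors are propagated over a
    sentence graph, a document-level topic vector is appended, and each sentence gets
    a salience score. Simplified to one propagation step for illustration."""
    def __init__(self, sent_dim=256, topic_dim=64):
        super().__init__()
        self.propagate = nn.Linear(sent_dim, sent_dim)   # one graph-propagation step
        self.score = nn.Linear(sent_dim + topic_dim, 1)  # salience score per sentence

    def forward(self, sent_vecs, adj, topic_vec):
        # sent_vecs: (n_sent, sent_dim); adj: (n_sent, n_sent) normalized adjacency;
        # topic_vec: (topic_dim,) document-level topic features from a topic model.
        h = torch.relu(self.propagate(adj @ sent_vecs))       # aggregate neighbor info
        topic = topic_vec.unsqueeze(0).expand(h.size(0), -1)  # broadcast topic features
        return self.score(torch.cat([h, topic], dim=-1)).squeeze(-1)

# Toy usage: pick the top-k scoring sentences as the extractive summary.
scorer = TopicAwareSentenceScorer()
scores = scorer(torch.randn(10, 256), torch.eye(10), torch.randn(64))
summary_ids = scores.topk(3).indices
```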

pdf bib
CN-HIT-IT.NLP at SemEval-2020 Task 4: Enhanced Language Representation with Multiple Knowledge Triples
Yice Zhang | Jiaxuan Lin | Yang Fan | Peng Jin | Yuanchao Liu | Bingquan Liu
Proceedings of the Fourteenth Workshop on Semantic Evaluation

This paper describes our system for SemEval-2020 Task 4: Commonsense Validation and Explanation. For this task, it is clear that external knowledge, such as a knowledge graph, can help the model understand commonsense in natural language statements. However, how to select the right triples for a statement remains unsolved, so reducing the interference of irrelevant triples on model performance is a research focus. This paper adopts a modified K-BERT as the language encoder to enhance language representation with triples from knowledge graphs. Experiments show that our method is better than models without external knowledge and is slightly better than the original K-BERT. We achieved an accuracy of 0.97 on subtask A, ranking 1/45, and an accuracy of 0.948 on subtask B, ranking 2/35.
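The knowledge-injection step described above can be illustrated with a minimal sketch: triples retrieved from a knowledge graph are spliced in next to the entity mentions they describe. K-BERT itself additionally uses soft positions and a visibility matrix so injected tokens only attend to their anchor entity; that machinery, and the toy data below, are omissions and assumptions of this sketch.

```python
from typing import Dict, List, Tuple

def inject_triples(tokens: List[str],
                   kg: Dict[str, List[Tuple[str, str]]],
                   max_per_entity: int = 2) -> List[str]:
    """Sketch of knowledge injection: for tokens matching a knowledge-graph entity,
    splice in up to `max_per_entity` (relation, object) pairs right after the token."""
    enriched = []
    for tok in tokens:
        enriched.append(tok)
        for rel, obj in kg.get(tok, [])[:max_per_entity]:
            enriched.extend([rel, obj])
    return enriched

# Toy knowledge graph and statement (illustrative data, not the task's resources).
kg = {"penguin": [("is_a", "bird"), ("can_not", "fly")]}
print(inject_triples("a penguin can fly south for winter".split(), kg))
```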

2019

pdf bib
Neural-based Chinese Idiom Recommendation for Enhancing Elegance in Essay Writing
Yuanchao Liu | Bo Pang | Bingquan Liu
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Although the proper use of idioms can enhance the elegance of writing, actively using a variety of such expressions is challenging because idioms are difficult to remember. In this study, we address the problem of idiom recommendation by leveraging a neural machine translation framework, in which we treat idioms as a pseudo target language. Two types of real-life datasets are collected to support this study. Experimental results show that the proposed approach achieves promising performance compared with other baseline methods.
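A minimal sketch of this framing follows: the sentence context is encoded and "translated" into an entry from a pseudo target vocabulary of idioms. Collapsing the target side to a single idiom token, the GRU encoder, and the vocabulary sizes are simplifying assumptions, not the paper's exact NMT configuration.

```python
import torch
import torch.nn as nn

class IdiomRecommender(nn.Module):
    """Sketch: encode the writing context, then emit a distribution over a pseudo
    target vocabulary consisting of idioms (the 'translation' target)."""
    def __init__(self, vocab_size=30000, n_idioms=5000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, n_idioms)  # distribution over the idiom "language"

    def forward(self, context_ids):
        _, h = self.encoder(self.embed(context_ids))  # final hidden state summarizes context
        return self.decoder(h.squeeze(0))             # logits over candidate idioms

model = IdiomRecommender()
logits = model(torch.randint(0, 30000, (1, 20)))      # one 20-token context
recommended = logits.topk(5).indices                  # top-5 idiom suggestions
```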

2017

pdf bib
Predicting Users’ Negative Feedbacks in Multi-Turn Human-Computer Dialogues
Xin Wang | Jianan Wang | Yuanchao Liu | Xiaolong Wang | Zhuoran Wang | Baoxun Wang
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

User experience is essential for human-computer dialogue systems. However, it is impractical to ask users to provide explicit feedback when an agent's responses displease them. Therefore, in this paper, we explore predicting users' imminent dissatisfaction with intelligent agents by analysing the existing utterances in the dialogue session. To our knowledge, this is the first work focusing on this task. Several possible factors that trigger negative emotions are modelled. A relation sequence model (RSM) is proposed to encode the sequence of appropriateness of the current response with respect to the earlier utterances. The experimental results show that the proposed structure is more effective at modelling emotional risk (the possibility of negative feedback) than existing conversation modelling approaches. Besides, strategies for obtaining distant supervision data for pre-training are also discussed in this work. Balanced sampling with respect to the last response in the distant supervision data is shown to be reliable for data augmentation.
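The relation-sequence idea can be sketched as follows: the current response is scored against each earlier utterance, and a recurrent layer over that sequence of relation vectors predicts the risk of negative feedback. The bilinear relation scorer, the LSTM, and all dimensions are illustrative assumptions, not the paper's exact RSM.

```python
import torch
import torch.nn as nn

class RelationSequenceModel(nn.Module):
    """Sketch: relate the current response to each earlier utterance, then run a
    recurrent layer over the relation sequence to estimate emotional risk."""
    def __init__(self, utt_dim=128, rel_dim=64):
        super().__init__()
        self.relate = nn.Bilinear(utt_dim, utt_dim, rel_dim)  # response-vs-history relation
        self.seq = nn.LSTM(rel_dim, rel_dim, batch_first=True)
        self.risk = nn.Linear(rel_dim, 1)

    def forward(self, history, response):
        # history: (n_turns, utt_dim); response: (utt_dim,)
        resp = response.unsqueeze(0).expand(history.size(0), -1)
        relations = torch.tanh(self.relate(resp, history)).unsqueeze(0)  # (1, n_turns, rel_dim)
        _, (h, _) = self.seq(relations)
        return torch.sigmoid(self.risk(h.squeeze(0)))  # probability of negative feedback

model = RelationSequenceModel()
risk = model(torch.randn(5, 128), torch.randn(128))    # 5 earlier utterances, 1 response
```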

2015

pdf bib
Predicting Polarities of Tweets by Composing Word Embeddings with Long Short-Term Memory
Xin Wang | Yuanchao Liu | Chengjie Sun | Baoxun Wang | Xiaolong Wang
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

pdf bib
WINGS: Writing with Intelligent Guidance and Suggestions
Xianjun Dai | Yuanchao Liu | Xiaolong Wang | Bingquan Liu
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2013

pdf bib
PAL: A Chatterbot System for Answering Domain-specific Questions
Yuanchao Liu | Ming Liu | Xiaolong Wang | Limin Wang | Jingjing Li
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2010

pdf bib
Research of People Disambiguation by Combining Multiple Knowledges
Erlei Ma | Yuanchao Liu
CIPS-SIGHAN Joint Conference on Chinese Language Processing