2020
IR&TM-NJUST@CLSciSumm 20
Heng Zhang | Lifan Liu | Ruping Wang | Shaohu Hu | Shutian Ma | Chengzhi Zhang
Proceedings of the First Workshop on Scholarly Document Processing
This paper mainly introduces our methods for Task 1A and Task 1B of CL-SciSumm 2020. Task 1A is to identify the reference text in the reference paper. Traditional machine learning models and an MLP model are used; we evaluate the performance of these models and submit the final results from the optimal one. Compared with previous work, we optimize the ratio of positive to negative examples after data sampling. To construct features for classification, we calculate similarities between the reference text and candidate sentences based on sentence vectors. Nine similarities are used in total: eight chosen from those we used in CL-SciSumm 2019, plus a new sentence similarity based on fastText. Task 1B is to classify the facets of the reference text. Unlike the methods used in CL-SciSumm 2019, this year we construct model inputs from word vectors and add deep learning models for classification.
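The similarity features described above can be illustrated with a minimal sketch: build a sentence vector by averaging per-word vectors (the common fastText-style approach) and compare two sentences by cosine similarity. The toy `word_vectors` dictionary here stands in for real pretrained embeddings and is purely an assumption for illustration, not the authors' actual feature pipeline.

```python
import math

def sentence_vector(tokens, word_vectors):
    """Average the vectors of known tokens; zero vector if none are known."""
    dims = len(next(iter(word_vectors.values())))
    vec = [0.0] * dims
    known = 0
    for tok in tokens:
        if tok in word_vectors:
            known += 1
            for i, v in enumerate(word_vectors[tok]):
                vec[i] += v
    return [v / known for v in vec] if known else vec

def cosine_similarity(a, b):
    """Cosine similarity between two vectors; 0.0 if either is all-zero."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

In the paper's setting, one such similarity score per embedding model (eight from 2019 plus the fastText-based one) would form the nine-dimensional feature vector fed to the classifiers.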
2019
Using Human Attention to Extract Keyphrase from Microblog Post
Yingyi Zhang | Chengzhi Zhang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
This paper studies automatic keyphrase extraction on social media. Previous works have achieved promising results, but they neglect human reading behavior during keyphrase annotation. Human attention is a crucial element of human reading behavior: it reveals the relevance of words to the main topics of the target text. Thus, this paper aims to integrate human attention into keyphrase extraction models. First, human attention is represented by reading durations estimated from an eye-tracking corpus. Then, we merge human attention into neural network models via an attention mechanism. In addition, we also integrate human attention into unsupervised models. To the best of our knowledge, we are the first to utilize human attention in keyphrase extraction tasks. The experimental results show that our models achieve significant improvements on two Twitter datasets.
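One way to read "merge human attention via an attention mechanism" is to turn per-word reading durations into normalized attention weights (e.g. with a softmax) and use them to form a weighted sentence representation. The sketch below is an illustrative interpretation under that assumption, not the paper's exact architecture, which combines these signals inside a neural tagger.

```python
import math

def attention_weights(durations):
    """Softmax over per-word reading durations (a proxy for human attention)."""
    m = max(durations)
    exps = [math.exp(d - m) for d in durations]  # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def attend(word_vectors, durations):
    """Duration-weighted sum of word vectors: words read longer contribute more."""
    weights = attention_weights(durations)
    dims = len(word_vectors[0])
    return [sum(w * vec[i] for w, vec in zip(weights, word_vectors))
            for i in range(dims)]
```

Words with longer estimated reading times receive larger weights, so the resulting representation is biased toward topically relevant words, which is the intuition the abstract describes.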
2018
Encoding Conversation Context for Neural Keyphrase Extraction from Microblog Posts
Yingyi Zhang | Jing Li | Yan Song | Chengzhi Zhang
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Existing keyphrase extraction methods suffer from the data sparsity problem when applied to short and informal texts, especially microblog messages. Enriching the context is one way to alleviate this problem. Since conversations are formed by reposting and replying to messages, they provide useful clues for recognizing essential content in target posts and are therefore helpful for keyphrase identification. In this paper, we present a neural keyphrase extraction framework for microblog posts that takes their conversation context into account, in which four types of neural encoders, namely averaged embedding, RNN, attention, and memory networks, are proposed to represent the conversation context. Experimental results on Twitter and Weibo datasets show that our framework with such encoders outperforms state-of-the-art approaches.
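The simplest of the four encoders, averaged embedding, can be sketched in a few lines: pool word vectors over all reposts and replies into one context vector and concatenate it with the target post's representation. The helper names and the toy embedding table are assumptions for illustration; the paper's framework feeds such representations into a neural sequence tagger rather than returning them directly.

```python
def average_embedding(tokens, word_vectors, dims):
    """Mean of known word vectors; zero vector if nothing is known."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return [0.0] * dims
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dims)]

def encode_with_context(target_tokens, context_posts, word_vectors, dims):
    """Averaged-embedding context encoder: pool all conversation posts into
    one mean vector, then concatenate it with the target post's vector."""
    target = average_embedding(target_tokens, word_vectors, dims)
    all_context = [tok for post in context_posts for tok in post]
    context = average_embedding(all_context, word_vectors, dims)
    return target + context  # concatenated representation for a downstream tagger
```

The RNN, attention, and memory-network encoders replace the mean pooling step with learned, order- or relevance-sensitive aggregations of the same context posts.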
2013
Finding More Bilingual Webpages with High Credibility via Link Analysis
Chengzhi Zhang | Xuchen Yao | Chunyu Kit
Proceedings of the Sixth Workshop on Building and Using Comparable Corpora