Yu Shi


2021

基于义原表示学习的词向量表示方法(Word Representation based on Sememe Representation Learning)
Ning Yu (于宁) | Jiangping Wang (王江萍) | Yu Shi (石宇) | Jianyi Liu (刘建毅)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

This paper leverages the knowledge in HowNet and transfers the structure and ideas of the Word2vec model to the process of sememe representation learning, proposing a word vector representation method based on learned sememe representations. First, we use OpenHowNet to obtain all sememes in the sememe knowledge base, all Chinese words, and the mapping from each Chinese word to its corresponding sememe set, which together serve as the experimental dataset. Then, based on the Skip-gram model, we train a sememe representation learning model and derive word vectors from it. Finally, we evaluate the resulting word vectors on word similarity, word sense disambiguation, and word analogy tasks, and by inspecting nearest-neighbor sememes. Compared with baseline models, the proposed method is both efficient and accurate: it requires neither a large-scale corpus nor a complex network structure with numerous parameters, yet it improves accuracy on a variety of natural language processing tasks.
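The pipeline the abstract describes (learn sememe embeddings with a Skip-gram-style objective, then derive each word's vector from its sememe set) can be illustrated with a minimal sketch. The toy lexicon, the dimensionality, and the mean-pooling composition below are illustrative assumptions, and the sememe vectors are random placeholders standing in for trained Skip-gram embeddings; real sememe sets would come from OpenHowNet:

```python
import numpy as np

# Hypothetical toy lexicon mapping words to HowNet-style sememe sets.
# In the paper's setting this mapping is extracted with OpenHowNet.
SEMEME_LEXICON = {
    "苹果": ["fruit", "computer", "bring"],   # "apple" (fruit / brand senses)
    "香蕉": ["fruit"],                         # "banana"
    "电脑": ["computer", "bring"],             # "computer"
}

DIM = 8  # illustrative embedding dimensionality
rng = np.random.default_rng(0)

# Placeholder sememe embeddings; the paper trains these with a
# Skip-gram objective rather than drawing them at random.
all_sememes = sorted({s for ss in SEMEME_LEXICON.values() for s in ss})
sememe_vecs = {s: rng.normal(size=DIM) for s in all_sememes}

def word_vector(word):
    """Compose a word vector as the mean of its sememes' vectors."""
    return np.mean([sememe_vecs[s] for s in SEMEME_LEXICON[word]], axis=0)

def cosine(a, b):
    """Cosine similarity, as used in word-similarity evaluation."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because word vectors are composed from a fixed sememe inventory, no large corpus is needed at composition time; only the sememe embeddings themselves are learned.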

2020

Mixed-Lingual Pre-training for Cross-lingual Summarization
Ruochen Xu | Chenguang Zhu | Yu Shi | Michael Zeng | Xuedong Huang
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Cross-lingual Summarization (CLS) aims at producing a summary in the target language for an article in the source language. Traditional solutions employ a two-step approach, i.e., translate-then-summarize or summarize-then-translate. Recently, end-to-end models have achieved better results, but these approaches are mostly limited by their dependence on large-scale labeled data. We propose a solution based on mixed-lingual pre-training that leverages both cross-lingual tasks, such as translation, and monolingual tasks, such as masked language modeling. Thus, our model can exploit massive monolingual data to enhance its modeling of language. Moreover, the architecture has no task-specific components, which saves memory and increases optimization efficiency. We show in experiments that this pre-training scheme effectively boosts the performance of cross-lingual summarization. On the NCLS dataset, our model improves over state-of-the-art results by 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 points.
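The core idea, one shared model pre-trained on a mixture of cross-lingual and monolingual tasks, can be sketched as a task-sampling schedule. The task names and mixture weights below are illustrative assumptions, not the paper's actual training configuration:

```python
import random

# Hypothetical task mixture for mixed-lingual pre-training: a single
# shared seq2seq model (no task-specific components) consumes one
# stream of examples drawn from several tasks.
TASK_WEIGHTS = {
    "translation": 0.4,   # cross-lingual supervision
    "masked_lm":   0.4,   # massive monolingual data
    "mono_summ":   0.2,   # monolingual summarization
}

def sample_task_stream(n, seed=0):
    """Draw a length-n schedule of pre-training tasks by mixture weight."""
    rng = random.Random(seed)
    names, weights = zip(*TASK_WEIGHTS.items())
    return rng.choices(names, weights=weights, k=n)
```

Because every task runs through the same parameters, adding a monolingual task costs no extra model components, which is the memory and optimization benefit the abstract notes.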

MaP: A Matrix-based Prediction Approach to Improve Span Extraction in Machine Reading Comprehension
Huaishao Luo | Yu Shi | Ming Gong | Linjun Shou | Tianrui Li
Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing

Span extraction is an essential problem in machine reading comprehension. Most existing algorithms predict the start and end positions of an answer span in the given context by generating two probability vectors. In this paper, we propose a novel approach that extends the probability vector to a probability matrix, which can cover more start-end position pairs. Specifically, for each possible start index, the method generates a dedicated end probability vector. In addition, we propose a sampling-based training strategy to address the computational cost and memory issues that arise when training the matrix. We evaluate our method on SQuAD 1.1 and three other question answering benchmarks. Using the highly competitive BERT and BiDAF models as backbones, our approach achieves consistent improvements on all datasets, demonstrating its effectiveness.
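The inference side of the matrix idea can be sketched as follows: instead of one shared end-score vector, each start index carries its own row of end scores, and the best valid span is read off the joint score matrix. The function name, the additive score combination, and the optional length cap are illustrative assumptions:

```python
import numpy as np

def extract_span(start_logits, end_logit_matrix, max_len=None):
    """Pick the best answer span from a start vector and an end matrix.

    start_logits:     shape (n,), score for each start position.
    end_logit_matrix: shape (n, n); row i holds end-position scores
                      conditioned on start index i (one end vector per
                      possible start, instead of a single shared vector).
    max_len:          optional cap on span length (in tokens).
    """
    n = start_logits.shape[0]
    joint = start_logits[:, None] + end_logit_matrix  # (n, n) span scores
    # Keep only valid spans: end >= start ...
    mask = np.triu(np.ones((n, n), dtype=bool))
    if max_len is not None:
        # ... and end - start + 1 <= max_len.
        mask &= ~np.triu(np.ones((n, n), dtype=bool), k=max_len)
    joint = np.where(mask, joint, -np.inf)
    start, end = np.unravel_index(np.argmax(joint), joint.shape)
    return int(start), int(end)
```

A single probability vector forces every start to share the same end distribution; the matrix lets an ambiguous start compete with span hypotheses that end differently, at the quadratic cost the paper's sampling-based training strategy addresses.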