He Shizhu
2023
End-to-End Taxonomy Construction Method with Pretrained Language Model (基于预训练语言模型的端到端概念体系构建方法)
Wang Siyi (思懿 王) | He Shizhu (世柱 何) | Liu Kang (康 刘) | Zhao Jun (军 赵)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
"A taxonomy describes the hypernym-hyponym (is-a) relations between concepts and organizes them into a hierarchy; it is an important class of knowledge resource. This paper studies automatic taxonomy construction: organizing a given set of concepts (a word set) into a tree-structured taxonomy (a concept tree) according to hypernymy relations. Traditional approaches decompose the task into two independent subtasks: judging hypernym-hyponym relations between concepts, and building the concept hierarchy. Because the two subtasks lack information feedback, errors easily accumulate. In recent years, more and more tasks have used pretrained language models to obtain the semantic features of words and judge the semantic relations between them; although this has achieved some success in taxonomy construction, such approaches can only model the first subtask and still suffer from error accumulation. To solve the error-accumulation problem of pipeline methods and effectively capture the semantic features of words and their relations, this paper proposes an end-to-end taxonomy construction method based on pretrained language models: on the one hand, it uses a pretrained language model to obtain semantic information about concepts and their hypernymy relations, along with structural information about the partially built taxonomy; on the other hand, it uses reinforcement learning to model relation judgment and the generation of the complete hierarchy end to end. Experiments on WordNet datasets show that the proposed method is effective; under the same conditions, our F1 score achieves a 7.3% relative improvement over the best existing model."
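The abstract couples a PLM-based hypernymy scorer with an end-to-end tree-generation policy trained by reinforcement learning. As a rough illustration only, the sketch below scores candidate is-a pairs with a pretrained model and attaches each concept greedily; the greedy loop, the "is a kind of" template, and the bert-base-uncased checkpoint are all assumptions standing in for the paper's actual RL-trained policy.

```python
# Hypothetical sketch of PLM-scored taxonomy construction.
# Greedy attachment is a simplified stand-in for the paper's
# end-to-end reinforcement-learning tree-generation policy.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # assumption: a PLM fine-tuned for hypernymy detection
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def hypernym_score(child: str, parent: str) -> float:
    """Score 'child is-a parent' with a template prompt (assumed form)."""
    inputs = tokenizer(f"{child} is a kind of {parent}.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.softmax(-1)[0, 1].item()  # probability of the is-a label

def build_tree(root: str, concepts: list[str]) -> dict[str, str]:
    """Attach each concept to its best-scoring parent among attached nodes."""
    parent_of: dict[str, str] = {}
    attached = [root]
    for c in concepts:
        best = max(attached, key=lambda p: hypernym_score(c, p))
        parent_of[c] = best
        attached.append(c)
    return parent_of

print(build_tree("animal", ["dog", "cat", "poodle"]))
```

Replacing the greedy argmax with a sampled action and rewarding the final tree's F1 against a gold taxonomy would recover the general shape of the RL formulation the abstract describes.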
2021
Multi-Strategy Knowledge Distillation Based Teacher-Student Framework for Machine Reading Comprehension
Yu Xiaoyan | Liu Qingbin | He Shizhu | Liu Kang | Liu Shengping | Zhao Jun | Zhou Yongbin
Proceedings of the 20th Chinese National Conference on Computational Linguistics
The irrelevant information in documents poses a great challenge for machine reading comprehension (MRC). To deal with such a challenge, current MRC models generally fall into two separate parts: evidence extraction and answer prediction, where the former extracts the key evidence corresponding to the question and the latter predicts the answer based on those sentences. However, such pipeline paradigms tend to accumulate errors, i.e., extracting the incorrect evidence results in predicting the wrong answer. In order to address this problem, we propose a Multi-Strategy Knowledge Distillation based Teacher-Student framework (MSKDTS) for machine reading comprehension. In our approach, we first take evidence and document respectively as the input reference information to build a teacher model and a student model. Then the multi-strategy knowledge distillation method transfers knowledge from the teacher model to the student model at both the feature level and the prediction level. Therefore, in the testing phase, the enhanced student model can predict answers similarly to the teacher model without being aware of which sentence is the corresponding evidence in the document. Experimental results on the ReCO dataset demonstrate the effectiveness of our approach, and further ablation studies prove the effectiveness of both knowledge distillation strategies.
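The abstract names two distillation strategies: feature-level and prediction-level. As a rough illustration only, the sketch below implements the common form of each, MSE between hidden states and KL divergence between temperature-softened output distributions; the function names, the temperature of 2.0, and the alpha weighting are assumptions, not details from the paper.

```python
# Hypothetical sketch of the two distillation strategies the abstract names:
# feature-level (match hidden representations) and prediction-level
# (match softened answer distributions). Shapes and weights are assumed.
import torch
import torch.nn.functional as F

def feature_level_loss(student_hidden: torch.Tensor,
                       teacher_hidden: torch.Tensor) -> torch.Tensor:
    """MSE between student and teacher hidden states; teacher is frozen."""
    return F.mse_loss(student_hidden, teacher_hidden.detach())

def prediction_level_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened distributions."""
    t = temperature
    return F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                    F.softmax(teacher_logits.detach() / t, dim=-1),
                    reduction="batchmean") * t * t

def mskd_loss(s_hidden, t_hidden, s_logits, t_logits,
              alpha: float = 0.5) -> torch.Tensor:
    """Combined objective; alpha is a hypothetical weighting hyperparameter."""
    return (alpha * feature_level_loss(s_hidden, t_hidden)
            + (1 - alpha) * prediction_level_loss(s_logits, t_logits))

# Toy tensors: batch of 4, sequence length 128, hidden size 768, 3 answer classes.
s_h, t_h = torch.randn(4, 128, 768), torch.randn(4, 128, 768)
s_l, t_l = torch.randn(4, 3), torch.randn(4, 3)
print(mskd_loss(s_h, t_h, s_l, t_l).item())
```

At test time only the student runs, taking the full document as input, which is what lets it answer without knowing which sentence is the evidence.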