FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection

Chenyang Guo, Xiaolong Hou, Junsong Ren, Lianxin Jiang, Yang Mo, Haiqin Yang, Jianping Shen


Abstract
This paper describes the model we apply to SemEval-2020 Task 10. We formalize the task of emphasis selection as a simplified query-based machine reading comprehension (MRC) task, i.e., answering the fixed query "Find candidates for emphasis". We propose a subword puzzle encoding mechanism and a subword fusion layer to align and fuse subwords. By introducing the semantic prior knowledge of the informative query, together with several other techniques, we attain 7th place in the evaluation phase and first place in the training phase.
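The sketch below is an illustrative reconstruction of the query-enhanced framing described in the abstract, not the authors' released code: the fixed query is prepended to the sentence as plain text, RoBERTa encodes the combined sequence, subword vectors are mean-pooled back to word level (a simple stand-in for the paper's subword fusion layer), and a linear head scores each sentence word for emphasis. The class name EmphasisScorer, the pooling choice, and the single-segment input are assumptions made for illustration.

import torch
import torch.nn as nn
from transformers import RobertaTokenizerFast, RobertaModel

QUERY = "Find candidates for emphasis"

class EmphasisScorer(nn.Module):
    """Scores each word of a sentence for emphasis, conditioned on the query.
    Illustrative sketch only; not the authors' implementation."""

    def __init__(self, model_name="roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, word_ids, first_sentence_word):
        # Jointly encode the query + sentence word sequence with RoBERTa.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[0]
        # Subword fusion (stand-in for the paper's fusion layer): mean-pool the
        # subword vectors that belong to the same original word, then score only
        # the sentence words (the query words serve as prior context).
        sentence_words = sorted({i for i in word_ids
                                 if i is not None and i >= first_sentence_word})
        scores = []
        for w in sentence_words:
            idx = [k for k, i in enumerate(word_ids) if i == w]
            word_vec = hidden[idx].mean(dim=0)
            scores.append(torch.sigmoid(self.classifier(word_vec)))
        return torch.stack(scores).squeeze(-1)  # one emphasis probability per word


tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
sentence = "Make every moment count".split()
words = QUERY.split() + sentence          # prepend the fixed query
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

model = EmphasisScorer()
with torch.no_grad():
    probs = model(enc["input_ids"], enc["attention_mask"],
                  enc.word_ids(batch_index=0),
                  first_sentence_word=len(QUERY.split()))
print(dict(zip(sentence, probs.tolist())))

In the actual system, the paper's subword puzzle encoding and fusion layer replace the simple mean-pooling shown here, and training optimizes per-word emphasis probabilities against the annotated emphasis labels.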
Anthology ID:
2020.semeval-1.215
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1652–1657
URL:
https://aclanthology.org/2020.semeval-1.215
DOI:
10.18653/v1/2020.semeval-1.215
Cite (ACL):
Chenyang Guo, Xiaolong Hou, Junsong Ren, Lianxin Jiang, Yang Mo, Haiqin Yang, and Jianping Shen. 2020. FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1652–1657, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
FPAI at SemEval-2020 Task 10: A Query Enhanced Model with RoBERTa for Emphasis Selection (Guo et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.215.pdf