Dimsum @LaySumm 20

Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung


Abstract
Lay summarization aims to generate lay summaries of scientific papers automatically. It is an essential task that can increase the relevance of science for all of society. In this paper, we build a lay summary generation system based on the BART model. We leverage sentence labels as extra supervision signals to improve the performance of lay summarization. In the CL-LaySumm 2020 shared task, our model achieves a 46.00 ROUGE-1 F1 score.
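The abstract only sketches the approach at a high level. As a rough, hedged illustration, the snippet below shows how a pretrained BART checkpoint can be used for abstractive summarization with the Hugging Face transformers library; the checkpoint name, input text, and generation settings are illustrative assumptions, and the sentence-label supervision described in the paper is not reproduced here (see the authors' code, TysonYu/Laysumm, for the actual system).

# Minimal sketch: abstractive summarization with a pretrained BART model.
# Assumptions: the "facebook/bart-large-cnn" checkpoint and the generation
# hyperparameters are placeholders, not the settings used in the paper.
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "facebook/bart-large-cnn"  # assumed off-the-shelf checkpoint
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

paper_text = "Text of the scientific paper to be summarized goes here."

# Truncate long papers to BART's 1024-token input limit.
inputs = tokenizer(paper_text, truncation=True, max_length=1024, return_tensors="pt")

# Beam search decoding to produce a lay summary.
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    max_length=200,
    min_length=50,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))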
Anthology ID:
2020.sdp-1.35
Volume:
Proceedings of the First Workshop on Scholarly Document Processing
Month:
November
Year:
2020
Address:
Online
Editors:
Muthu Kumar Chandrasekaran, Anita de Waard, Guy Feigenblat, Dayne Freitag, Tirthankar Ghosal, Eduard Hovy, Petr Knoth, David Konopnicki, Philipp Mayr, Robert M. Patton, Michal Shmueli-Scheuer
Venue:
sdp
Publisher:
Association for Computational Linguistics
Pages:
303–309
URL:
https://aclanthology.org/2020.sdp-1.35
DOI:
10.18653/v1/2020.sdp-1.35
Cite (ACL):
Tiezheng Yu, Dan Su, Wenliang Dai, and Pascale Fung. 2020. Dimsum @LaySumm 20. In Proceedings of the First Workshop on Scholarly Document Processing, pages 303–309, Online. Association for Computational Linguistics.
Cite (Informal):
Dimsum @LaySumm 20 (Yu et al., sdp 2020)
PDF:
https://aclanthology.org/2020.sdp-1.35.pdf
Video:
https://slideslive.com/38940741
Code:
TysonYu/Laysumm
Data:
ScisummNet