Dialogue Summarization using BART

Conrad Lundberg, Leyre Sánchez Viñuela, Siena Biales


Abstract
This paper introduces the model and settings submitted to the INLG 2022 DialogSum Challenge, a shared task to generate summaries of real-life scenario dialogues between two people. We explored intermediate task transfer learning, reported speech, and the use of a supplementary dataset in addition to our base fine-tuned BART model. However, none of these approaches improved our results, so they were not used in our final model. Our final model for this dialogue task achieved scores only slightly below the top submission, with hidden test set scores of 49.62, 24.98, 46.25 and 91.54 for ROUGE-1, ROUGE-2, ROUGE-L and BERTScore respectively. The top submitted models will also receive human evaluation.
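The ROUGE-N scores reported in the abstract measure n-gram overlap between a candidate summary and a reference summary. As a rough illustration only (the function name is ours; the official ROUGE toolkit additionally applies stemming and other preprocessing not reproduced here), a minimal pure-Python sketch of ROUGE-N F1 might look like:

```python
from collections import Counter


def rouge_n_f1(reference: str, candidate: str, n: int = 1) -> float:
    """Simplified ROUGE-N F1: n-gram overlap between reference and candidate.

    This is an illustrative sketch, not the official scorer, which also
    performs stemming and other normalization.
    """
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    ref, cand = ngrams(reference), ngrams(candidate)
    if not ref or not cand:
        return 0.0
    # Multiset intersection counts clipped n-gram matches.
    overlap = sum((ref & cand).values())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n_f1("the cat sat on the mat", "the cat sat", n=1)` rewards the three shared unigrams with full precision but partial recall.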
Anthology ID:
2022.inlg-genchal.17
Volume:
Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges
Month:
July
Year:
2022
Address:
Waterville, Maine, USA and virtual meeting
Editors:
Samira Shaikh, Thiago Ferreira, Amanda Stent
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
121–125
URL:
https://aclanthology.org/2022.inlg-genchal.17
Cite (ACL):
Conrad Lundberg, Leyre Sánchez Viñuela, and Siena Biales. 2022. Dialogue Summarization using BART. In Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges, pages 121–125, Waterville, Maine, USA and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Dialogue Summarization using BART (Lundberg et al., INLG 2022)
PDF:
https://aclanthology.org/2022.inlg-genchal.17.pdf