Task Proposal: The TL;DR Challenge

Shahbaz Syed, Michael Völske, Martin Potthast, Nedim Lipka, Benno Stein, Hinrich Schütze


Abstract
The TL;DR challenge fosters research in abstractive summarization of informal text, the largest and fastest-growing source of textual data on the web, which has so far been overlooked by summarization research. The challenge owes its name to the frequent practice among social media users of supplementing long posts with a "TL;DR" (for "too long; didn't read") followed by a short summary, as a courtesy to readers who would otherwise reply with the same abbreviation to indicate that they did not care to read a post of that length. Posts featuring TL;DR summaries form an excellent ground truth for summarization, and by tapping into this resource for the first time, we have mined millions of training examples from social media, opening the door to all kinds of generative models.
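
To illustrate the kind of content/summary pairs the abstract describes, here is a minimal Python sketch that splits a post into its body and its author-written TL;DR summary at the "TL;DR" marker. This is not the authors' mining pipeline; the pattern, the function name split_tldr_post, and the example post are all assumptions made purely for illustration.

import re

# Hypothetical helper (not from the paper): split a social media post into
# (content, summary) at the author's own "TL;DR" marker, mirroring the kind
# of ground-truth pairs the challenge draws on.
TLDR_PATTERN = re.compile(r"\btl\s*;?\s*dr\b[:\s-]*", re.IGNORECASE)

def split_tldr_post(post: str):
    """Return (content, summary) if the post carries a TL;DR summary, else None."""
    match = TLDR_PATTERN.search(post)
    if match is None:
        return None
    content = post[: match.start()].strip()
    summary = post[match.end():].strip()
    # Keep only pairs where both sides are non-trivial.
    if not content or not summary:
        return None
    return content, summary

# Example usage with a made-up post:
pair = split_tldr_post(
    "I spent three hours debugging a config typo before realizing the "
    "service was reading a stale file. TL;DR: check which config file "
    "is actually loaded before debugging anything else."
)
if pair:
    content, summary = pair
    print("CONTENT:", content)
    print("SUMMARY:", summary)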
Anthology ID: W18-6538
Volume: Proceedings of the 11th International Conference on Natural Language Generation
Month: November
Year: 2018
Address: Tilburg University, The Netherlands
Editors: Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Venue: INLG
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 318–321
URL: https://aclanthology.org/W18-6538
DOI: 10.18653/v1/W18-6538
Cite (ACL): Shahbaz Syed, Michael Völske, Martin Potthast, Nedim Lipka, Benno Stein, and Hinrich Schütze. 2018. Task Proposal: The TL;DR Challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 318–321, Tilburg University, The Netherlands. Association for Computational Linguistics.
Cite (Informal): Task Proposal: The TL;DR Challenge (Syed et al., INLG 2018)
PDF: https://aclanthology.org/W18-6538.pdf