Conciseness: An Overlooked Language Task

Felix Stahlberg, Aashish Kumar, Chris Alberti, Shankar Kumar
Abstract
We report on novel investigations into training models that make sentences concise. We define the task and show that it is different from related tasks such as summarization and simplification. For evaluation, we release two test sets, consisting of 2000 sentences each, that were annotated by two and five human annotators, respectively. We demonstrate that conciseness is a difficult task for which zero-shot setups with large neural language models often do not perform well. Given the limitations of these approaches, we propose a synthetic data generation method based on round-trip translations. Using this data to either train Transformers from scratch or fine-tune T5 models yields our strongest baselines that can be further improved by fine-tuning on an artificial conciseness dataset that we derived from multi-annotator machine translation test sets.
Anthology ID:
2022.tsar-1.5
Volume:
Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Virtual)
Editors:
Sanja Štajner, Horacio Saggion, Daniel Ferrés, Matthew Shardlow, Kim Cheng Sheang, Kai North, Marcos Zampieri, Wei Xu
Venue:
TSAR
Publisher:
Association for Computational Linguistics
Pages:
43–56
URL:
https://aclanthology.org/2022.tsar-1.5
DOI:
10.18653/v1/2022.tsar-1.5
Cite (ACL):
Felix Stahlberg, Aashish Kumar, Chris Alberti, and Shankar Kumar. 2022. Conciseness: An Overlooked Language Task. In Proceedings of the Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), pages 43–56, Abu Dhabi, United Arab Emirates (Virtual). Association for Computational Linguistics.
Cite (Informal):
Conciseness: An Overlooked Language Task (Stahlberg et al., TSAR 2022)
PDF:
https://aclanthology.org/2022.tsar-1.5.pdf
Video:
https://aclanthology.org/2022.tsar-1.5.mp4