MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational Transcript Cleanup

Hua Shen, Vicky Zayats, Johann Rocholl, Daniel Walker, Dirk Padfield


Abstract
Current disfluency detection models focus on individual utterances, each from a single speaker. However, many discontinuity phenomena in spoken conversational transcripts occur across multiple turns and cannot be identified by such models. This study addresses these phenomena by proposing a novel Multi-Turn Cleanup task for spoken conversational transcripts and collecting a new dataset, MultiTurnCleanup. We design a data labeling schema to collect a high-quality dataset and provide extensive data analysis. Furthermore, we evaluate two modeling approaches as benchmarks for future research.
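A minimal sketch of how multi-turn cleanup can be framed as a baseline task, assuming a token-level keep/delete classification over a window of consecutive turns with a generic Hugging Face encoder; the model name, the "[TURN]" separator, and the label scheme are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: multi-turn transcript cleanup as token-level keep/delete classification.
# Assumptions (not from the paper): bert-base-uncased encoder, 2 labels, "[TURN]" separator.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)  # 0 = keep, 1 = delete

# Two consecutive turns joined into one input window; the model predicts,
# per token, whether that token should be removed from the cleaned transcript.
turns = ["so um we could meet on on tuesday", "tuesday works yeah tuesday is fine"]
text = " [TURN] ".join(turns)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, 2)
predictions = logits.argmax(dim=-1)[0]         # per-token keep/delete decisions

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
cleaned = [t for t, p in zip(tokens, predictions) if p.item() == 0]
print(cleaned)  # untrained model: outputs are arbitrary until fine-tuned on MultiTurnCleanup
```

In this framing, cross-turn redundancies (such as the repeated "tuesday") become deletion targets that a single-utterance disfluency detector would never see, which is the gap the MultiTurnCleanup dataset is designed to cover.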
Anthology ID:
2023.emnlp-main.613
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9895–9903
URL:
https://aclanthology.org/2023.emnlp-main.613
DOI:
10.18653/v1/2023.emnlp-main.613
Cite (ACL):
Hua Shen, Vicky Zayats, Johann Rocholl, Daniel Walker, and Dirk Padfield. 2023. MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational Transcript Cleanup. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9895–9903, Singapore. Association for Computational Linguistics.
Cite (Informal):
MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational Transcript Cleanup (Shen et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.613.pdf
Video:
https://aclanthology.org/2023.emnlp-main.613.mp4