Daniel Walker
2023
MultiTurnCleanup: A Benchmark for Multi-Turn Spoken Conversational Transcript Cleanup
Hua Shen | Vicky Zayats | Johann Rocholl | Daniel Walker | Dirk Padfield
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Current disfluency detection models focus on individual utterances, each from a single speaker. However, numerous discontinuity phenomena in spoken conversational transcripts occur across multiple turns, and these cannot be identified by disfluency detection models. This study addresses these phenomena by proposing an innovative Multi-Turn Cleanup task for spoken conversational transcripts and collecting a new dataset, MultiTurnCleanup. We design a data labeling schema to collect a high-quality dataset and provide extensive data analysis. Furthermore, we leverage two modeling approaches for experimental evaluation as benchmarks for future research.
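The abstract does not spell out the label schema or the two benchmark modeling approaches, so the following is only a minimal sketch, assuming a binary keep/remove token labeling applied across speaker turns; the Turn dataclass, the labels, and the example transcript are invented for illustration.

```python
# Hypothetical sketch: treating multi-turn cleanup as token-level keep/remove
# labeling across speaker turns. The actual MultiTurnCleanup schema is not
# described in the abstract above; this is an assumed formulation.

from dataclasses import dataclass
from typing import List


@dataclass
class Turn:
    speaker: str
    tokens: List[str]
    remove: List[int]  # assumed schema: 1 = remove during cleanup, 0 = keep


def clean_transcript(turns: List[Turn]) -> str:
    """Apply the (assumed) binary keep/remove labels across all turns."""
    kept = []
    for turn in turns:
        kept.extend(tok for tok, r in zip(turn.tokens, turn.remove) if r == 0)
    return " ".join(kept)


# Toy example: a restart whose repair spans a turn boundary, i.e. a
# discontinuity a single-utterance disfluency detector would not see.
turns = [
    Turn("A", ["so", "we", "could", "--"], [0, 1, 1, 1]),
    Turn("A", ["we", "could", "meet", "tomorrow"], [0, 0, 0, 0]),
]
print(clean_transcript(turns))  # -> "so we could meet tomorrow"
```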
2022
Teaching BERT to Wait: Balancing Accuracy and Latency for Streaming Disfluency Detection
Angelica Chen | Vicky Zayats | Daniel Walker | Dirk Padfield
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
In modern interactive speech-based systems, speech is consumed and transcribed incrementally prior to having disfluencies removed. While this post-processing step is crucial for producing clean transcripts and high performance on downstream tasks (e.g. machine translation), most current state-of-the-art NLP models such as the Transformer operate non-incrementally, potentially causing unacceptable delays for the user. In this work we propose a streaming BERT-based sequence tagging model that, combined with a novel training objective, is capable of detecting disfluencies in real-time while balancing accuracy and latency. This is accomplished by training the model to decide whether to immediately output a prediction for the current input or to wait for further context, in essence learning to dynamically size the lookahead window. Our results demonstrate that our model produces comparably accurate predictions and does so sooner than our baselines, with lower flicker. Furthermore, the model attains state-of-the-art latency and stability scores when compared with recent work on incremental disfluency detection.
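The wait-or-predict decision can be pictured as a small control loop over the incoming token stream. The sketch below is a hedged illustration only: the confidence scorer, threshold, and max_lookahead cap are placeholders standing in for the paper's trained BERT tagger and training objective, not the authors' implementation.

```python
# Minimal sketch of a streaming tagger that either commits a disfluency tag
# for the oldest pending token or waits for more right context. The scorer,
# threshold, and lookahead cap are invented placeholders for illustration.

from typing import Callable, Iterable, List, Tuple


def stream_tags(
    tokens: Iterable[str],
    score: Callable[[List[str], int], Tuple[str, float]],
    threshold: float = 0.9,
    max_lookahead: int = 3,
) -> Iterable[Tuple[str, str]]:
    """Yield (token, tag) as soon as confidence is high enough, or once
    max_lookahead further tokens have arrived; committed tags never change,
    so the output does not flicker."""
    buffer: List[str] = []
    pending = 0  # index of the oldest token without a committed tag
    for tok in tokens:
        buffer.append(tok)
        while pending < len(buffer):
            tag, conf = score(buffer, pending)
            lookahead = len(buffer) - pending - 1
            if conf >= threshold or lookahead >= max_lookahead:
                yield buffer[pending], tag  # commit now
                pending += 1
            else:
                break  # wait for further context before deciding
    # flush: once the stream ends there is no more context to wait for
    while pending < len(buffer):
        tag, _ = score(buffer, pending)
        yield buffer[pending], tag
        pending += 1


# Toy scorer (not a learned model): flags filled pauses immediately, but is
# unsure about a possible repetition until one more token of context arrives.
def toy_score(buf: List[str], i: int) -> Tuple[str, float]:
    if buf[i] in {"uh", "um"}:
        return "DISFL", 1.0
    if i + 1 < len(buf):
        return ("DISFL", 1.0) if buf[i] == buf[i + 1] else ("FLUENT", 1.0)
    return "FLUENT", 0.5


print(list(stream_tags("i i uh want coffee".split(), toy_score)))
```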
2010
Evaluating Models of Latent Document Semantics in the Presence of OCR Errors
Daniel Walker | William B. Lund | Eric K. Ringger
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing
Co-authors
- Vicky Zayats 2
- Dirk Padfield 2
- Angelica Chen 1
- Hua Shen 1
- Johann Rocholl 1