Investigating large language models for their competence in extracting grammatically sound sentences from transcribed noisy utterances

Alina Wróblewska


Abstract
Selectively processing noisy utterances while effectively disregarding speech-specific elements poses no considerable challenge for humans, as they exhibit remarkable cognitive abilities to separate semantically significant content from speech-specific noise (i.e. filled pauses, disfluencies, and restarts). These abilities may be driven by mechanisms based on acquired grammatical rules that compose abstract syntactic-semantic structures within utterances. Segments without syntactic and semantic significance are consistently disregarded in these structures. The structures, in tandem with lexis, likely underpin language comprehension and thus facilitate effective communication. In our study, grounded in linguistically motivated experiments, we investigate whether large language models (LLMs) can effectively perform analogous speech comprehension tasks. In particular, we examine the ability of LLMs to extract well-structured utterances from transcriptions of noisy dialogues. We conduct two evaluation experiments in the Polish language scenario, using a dataset presumably unfamiliar to LLMs to mitigate the risk of data contamination. Our results show that not all extracted utterances are correctly structured, indicating that either LLMs do not fully acquire syntactic-semantic rules or they acquire them but cannot apply them effectively. We conclude that the ability of LLMs to comprehend noisy utterances is still relatively superficial compared to human proficiency in processing them.
Anthology ID:
2024.conll-1.2
Volume:
Proceedings of the 28th Conference on Computational Natural Language Learning
Month:
November
Year:
2024
Address:
Miami, FL, USA
Editors:
Libby Barak, Malihe Alikhani
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
10–23
URL:
https://aclanthology.org/2024.conll-1.2
Cite (ACL):
Alina Wróblewska. 2024. Investigating large language models for their competence in extracting grammatically sound sentences from transcribed noisy utterances. In Proceedings of the 28th Conference on Computational Natural Language Learning, pages 10–23, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal):
Investigating large language models for their competence in extracting grammatically sound sentences from transcribed noisy utterances (Wróblewska, CoNLL 2024)
PDF:
https://aclanthology.org/2024.conll-1.2.pdf