Leveraging pre-trained large language models for aphasia detection in English and Chinese speakers

Yan Cong, Jiyeon Lee, Arianna LaCroix


Abstract
We explore the utility of pre-trained Large Language Models (LLMs) in detecting the presence, subtypes, and severity of aphasia across English and Mandarin Chinese speakers. Our investigation suggests that even without fine-tuning or domain-specific training, pre-trained LLMs can offer some insights into language disorders, regardless of speakers’ first language. Our analysis also reveals noticeable differences between English and Chinese LLMs. While the English LLMs exhibit near-chance-level accuracy in subtyping aphasia, the Chinese counterparts demonstrate less than satisfactory performance in distinguishing between individuals with and without aphasia. This research underscores the importance of linguistically tailored, language-specific approaches to leveraging LLMs for clinical applications, especially in the context of multilingual populations.
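The page carries no code, but one common way to probe a pre-trained LLM for language disorders without any fine-tuning is to use the model's perplexity on a speech transcript as a signal. The sketch below illustrates that general idea only; the model names, the example transcripts, and the use of raw perplexity as a standalone score are assumptions for illustration, not the authors' pipeline.

```python
# A minimal sketch, assuming Hugging Face GPT-2 checkpoints: score a
# transcript with a pre-trained LM's perplexity, with no fine-tuning.
# Model choice and example sentences are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def transcript_perplexity(text: str) -> float:
    """Perplexity of a transcript under the pre-trained LM."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels yields the mean cross-entropy loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return float(torch.exp(out.loss))

# Fluent vs. fragmented picture-description samples (hypothetical).
print(transcript_perplexity("The boy is taking a cookie from the jar."))
print(transcript_perplexity("Boy ... cookie ... jar ... take take take."))
# For Mandarin transcripts, a Chinese causal LM (e.g.
# "uer/gpt2-chinese-cluecorpussmall") could be swapped in the same way.
```

In practice such scores would serve only as features in a classifier validated against clinical labels; as the abstract notes, how informative a given pre-trained model is appears to differ markedly between English and Chinese.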
Anthology ID: 2024.clinicalnlp-1.20
Volume: Proceedings of the 6th Clinical Natural Language Processing Workshop
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Danielle Bitterman
Venues: ClinicalNLP | WS
Publisher: Association for Computational Linguistics
Pages: 238–245
URL: https://aclanthology.org/2024.clinicalnlp-1.20
DOI: 10.18653/v1/2024.clinicalnlp-1.20
Cite (ACL): Yan Cong, Jiyeon Lee, and Arianna LaCroix. 2024. Leveraging pre-trained large language models for aphasia detection in English and Chinese speakers. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 238–245, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Leveraging pre-trained large language models for aphasia detection in English and Chinese speakers (Cong et al., ClinicalNLP-WS 2024)
PDF: https://aclanthology.org/2024.clinicalnlp-1.20.pdf