Who needs context? Classical techniques for Alzheimer’s disease detection

Behrad Taghibeyglou, Frank Rudzicz


Abstract
Natural language processing (NLP) has shown great potential for Alzheimer’s disease (AD) detection, particularly due to the adverse effect of AD on spontaneous speech. The current body of literature has directed attention toward context-based models, especially Bidirectional Encoder Representations from Transformers (BERT), owing to their exceptional ability to integrate contextual information in a wide range of NLP tasks. This comes at the cost of added model opacity and computational requirements. Taking this into consideration, we propose a Word2Vec-based model for AD detection in 108 age- and sex-matched participants who were asked to describe the Cookie Theft picture. We also investigate the effectiveness of our model by fine-tuning BERT-based sequence classification models, as well as by incorporating linguistic features. Our results demonstrate that our lightweight and easy-to-implement model outperforms some of the state-of-the-art models in the literature, as well as the BERT models.
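The abstract does not spell out the pipeline, but the core idea (static Word2Vec embeddings plus a classical classifier, rather than a contextual transformer) can be sketched as below. Everything in this sketch is an assumption for illustration: the mean pooling, the logistic-regression classifier, and the toy transcripts and labels are stand-ins, not the authors’ exact method or data.

```python
# Illustrative sketch only: mean-pooled Word2Vec embeddings + a classical
# classifier. The pooling strategy, classifier choice, and all data below are
# assumptions, not the method or cohort reported in the paper.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy transcripts standing in for Cookie Theft picture descriptions (not real data).
transcripts = [
    "the boy is on the stool reaching for the cookie jar".split(),
    "the water is overflowing from the sink onto the floor".split(),
    "the thing the the boy uh the jar um falling".split(),
    "mother is drying dishes while the sink overflows".split(),
]
labels = np.array([0, 0, 1, 1])  # toy labels: 0 = control, 1 = AD

# Train Word2Vec on the transcripts themselves; a pretrained model could be
# substituted (e.g. via gensim.downloader, "word2vec-google-news-300").
w2v = Word2Vec(sentences=transcripts, vector_size=100, window=5,
               min_count=1, workers=1, seed=0)

def embed(tokens, model):
    """Mean-pool the word vectors; return a zero vector if no token is known."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)

X = np.vstack([embed(t, w2v) for t in transcripts])
clf = LogisticRegression(max_iter=1000)
print("toy CV accuracy:", cross_val_score(clf, X, labels, cv=2).mean())
```

On real data, the embeddings and the classifier (and any added linguistic features) would of course be selected and validated on the actual age- and sex-matched cohort described in the abstract; the appeal of this style of model is that it is lightweight and transparent compared with fine-tuning a BERT-based sequence classifier.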
Anthology ID: 2023.clinicalnlp-1.13
Volume: Proceedings of the 5th Clinical Natural Language Processing Workshop
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Tristan Naumann, Asma Ben Abacha, Steven Bethard, Kirk Roberts, Anna Rumshisky
Venue: ClinicalNLP
Publisher: Association for Computational Linguistics
Pages: 102–107
URL: https://aclanthology.org/2023.clinicalnlp-1.13
DOI: 10.18653/v1/2023.clinicalnlp-1.13
Cite (ACL): Behrad Taghibeyglou and Frank Rudzicz. 2023. Who needs context? Classical techniques for Alzheimer’s disease detection. In Proceedings of the 5th Clinical Natural Language Processing Workshop, pages 102–107, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Who needs context? Classical techniques for Alzheimer’s disease detection (Taghibeyglou & Rudzicz, ClinicalNLP 2023)
PDF: https://aclanthology.org/2023.clinicalnlp-1.13.pdf