Aida Davani
2024
Yi Yang | Aida Davani | Avi Sil | Anoop Kumar
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
2019
Modeling performance differences on cognitive tests using LSTMs and skip-thought vectors trained on reported media consumption.
Maury Courtland | Aida Davani | Melissa Reyes | Leigh Yeh | Jun Leung | Brendan Kennedy | Morteza Dehghani | Jason Zevin
Proceedings of the Third Workshop on Natural Language Processing and Computational Social Science
Cognitive tests have traditionally relied on standardized testing materials, both in the name of equality and because creating test items is onerous. This approach ignores participants’ diverse language experiences, which can substantially affect testing outcomes. Here, we seek to explain our prior finding of significant performance differences on two cognitive tests (reading span and SPiN) between clusters of participants grouped by their media consumption. We model the language contained in these media sources using an LSTM trained on each cluster’s corpus to predict target words. We also model the semantic similarity between test items and each cluster’s corpus using skip-thought vectors. We find robust, significant correlations between the LSTM and skip-thought models presented here and performance on the SPiN test, but not on the reading span test.
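As a rough, illustrative sketch of the first modelling step described in the abstract (an LSTM trained on one cluster's media corpus to predict target words), the Python/PyTorch snippet below shows the general shape of such a model. The toy corpus, architecture sizes, and training loop are assumptions made for illustration and do not reproduce the authors' implementation; the skip-thought similarity step is not shown.

# Minimal sketch (not the authors' code): an LSTM language model trained on a
# toy stand-in for one cluster's media corpus to predict the next (target) word.
import torch
import torch.nn as nn

corpus = ["the news anchor reported the story",
          "the story aired on the evening news"]   # illustrative toy corpus only
tokens = [s.split() for s in corpus]
vocab = {w: i for i, w in enumerate(sorted({w for s in tokens for w in s}))}

class NextWordLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)                          # next-word logits at every position

model = NextWordLSTM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):                                # tiny training loop on the toy corpus
    for sent in tokens:
        ids = torch.tensor([[vocab[w] for w in sent]])
        logits = model(ids[:, :-1])                 # predict each following word
        loss = loss_fn(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()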