Martin Pickering
2024
Do large language models resemble humans in language use?
Zhenguang Cai | Xufeng Duan | David Haslett | Shuqi Wang | Martin Pickering
Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
It is unclear whether large language models (LLMs) develop humanlike characteristics in language use. We subjected ChatGPT and Vicuna to 12 pre-registered psycholinguistic experiments ranging from sounds to dialogue. ChatGPT and Vicuna replicated the human pattern of language use in 10 and 7 of the 12 experiments, respectively. The models associated unfamiliar words with different meanings depending on their forms, continued to access recently encountered meanings of ambiguous words, reused recent sentence structures, attributed causality as a function of verb semantics, and accessed different meanings and retrieved different words depending on an interlocutor’s identity. In addition, ChatGPT, but not Vicuna, nonliterally interpreted implausible sentences that were likely to have been corrupted by noise, drew reasonable inferences, and overlooked semantic fallacies in a sentence. Finally, unlike humans, neither model preferred using shorter words to convey less informative content, nor did either use context to resolve syntactic ambiguities. We discuss how these convergences and divergences may result from the transformer architecture. Overall, these experiments demonstrate that LLMs such as ChatGPT (and Vicuna to a lesser extent) are humanlike in many aspects of language processing.
2021
Lexical Alignment to Non-native Speakers
Iva Ivanova | Holly Branigan | Janet McLean | Albert Costa | Martin Pickering
Dialogue Discourse Volume 12
Two picture-matching-game experiments investigated whether lexical-referential alignment to non-native speakers is enhanced by a desire to aid communicative success (by saying something the conversation partner can certainly understand), a form of audience design. In Experiment 1, a group of native speakers of British English that was given no evidence of their conversation partners’ picture-matching performance showed more alignment to non-native than to native speakers, while another group that was given such evidence aligned equivalently to the two types of speaker. Experiment 2, conducted with speakers of Castilian Spanish, replicated the greater alignment to non-native than to native speakers without feedback. However, Experiment 2 also showed that the confederate’s production of grammatical errors produced no additional increase in alignment, even though making errors suggests lower communicative competence. We suggest that this pattern is consistent with another collaborative strategy, the desire to model correct usage. Together, these results support a role for audience design in alignment to non-native speakers in structured task-based dialogue, but one that is strategically deployed only when deemed necessary.