Sammy Floyd
2023
A fine-grained comparison of pragmatic language understanding in humans and language models
Jennifer Hu | Sammy Floyd | Olessia Jouravlev | Evelina Fedorenko | Edward Gibson
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Pragmatics and non-literal language understanding are essential to human communication, and present a long-standing challenge for artificial language models. We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials. We ask whether models (1) select pragmatic interpretations of speaker utterances, (2) make similar error patterns as humans, and (3) use similar linguistic cues as humans to solve the tasks. We find that the largest models achieve high accuracy and match human error patterns: within incorrect responses, models favor literal interpretations over heuristic-based distractors. We also find preliminary evidence that models and humans are sensitive to similar linguistic cues. Our results suggest that pragmatic behaviors can emerge in models without explicitly constructed representations of mental states. However, models tend to struggle with phenomena relying on social expectation violations.
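The listing does not include the paper's evaluation code. As an illustrative sketch only, the snippet below shows one common way to run a zero-shot multiple-choice comparison with a causal language model, scoring each candidate interpretation by its log-likelihood as a continuation of the prompt. The model choice (gpt2), the prompt wording, and the example item are assumptions made for demonstration, not the authors' materials or setup.

```python
# Illustrative sketch only: zero-shot evaluation of a causal LM on a
# multiple-choice pragmatics item by comparing candidate log-likelihoods.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def option_logprob(context: str, option: str) -> float:
    """Summed log-probability of `option` as a continuation of `context`."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[0, ctx_len - 1:].sum().item()  # option tokens only

# Hypothetical indirect-speech item (pragmatic vs. literal vs. distractor).
context = ("Ann asks Bob if he wants to go for a run. Bob replies: "
           "'I have a deadline tomorrow.' What does Bob mean?\nAnswer:")
options = [" He cannot go for a run.",            # pragmatic interpretation
           " He has a deadline tomorrow.",        # literal restatement
           " He loves running before deadlines."] # distractor
scores = {o: option_logprob(context, o) for o in options}
print("model choice:", max(scores, key=scores.get).strip())
```

The sketch covers only the per-item scoring step; in the paper, model accuracy and error patterns on such items are then compared against human responses.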
2019
Modeling the Acquisition of Words with Multiple Meanings
Libby Barak | Sammy Floyd | Adele Goldberg
Proceedings of the Society for Computation in Linguistics (SCiL) 2019
Polysemous Language in Child Directed Speech
Sammy Floyd | Libby Barak | Adele Goldberg | Casey Lew-Williams
Proceedings of the 2019 Workshop on Widening NLP
Learning the meaning of words is one of the fundamental building blocks of verbal communication. Models of child language acquisition have generally made the simplifying assumption that each word appears in child-directed speech with a single meaning. To understand naturalistic word learning during childhood, it is essential to know whether children hear input that is in fact constrained to a single meaning per word, or whether the environment naturally contains multiple senses. In this study, we use a topic modeling approach to automatically induce word senses from child-directed speech. Our results confirm the plausibility of our automated analysis approach and reveal an increasing rate of multiple-sense use in child-directed speech, beginning with corpora from children as early as the first year of life.
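No code accompanies this listing. As a minimal sketch of the general idea (not the authors' pipeline), the snippet below induces coarse "senses" for one target word by topic-modeling the utterance contexts in which it occurs. The toy utterances, the target word, and the choice of gensim's LDA with two topics are all assumptions for illustration.

```python
# Illustrative sketch only: inducing coarse senses for a target word by
# topic-modeling the utterance contexts it appears in.
from gensim import corpora
from gensim.models import LdaModel

target = "cap"  # hypothetical polysemous word in child-directed speech
utterances = [
    "put the cap back on the bottle",
    "screw the cap on tight so it does not spill",
    "wear your cap outside it is sunny",
    "the red cap goes on your head",
]

# Context = the other words in each utterance containing the target word.
contexts = [[w for w in u.split() if w != target] for u in utterances]

dictionary = corpora.Dictionary(contexts)
bow = [dictionary.doc2bow(c) for c in contexts]

# Two topics stand in for two candidate senses of the target word.
lda = LdaModel(bow, num_topics=2, id2word=dictionary, passes=50, random_state=0)

for utt, doc in zip(utterances, bow):
    sense = max(lda.get_document_topics(doc), key=lambda t: t[1])[0]
    print(f"sense {sense}: {utt}")
```

Each utterance is assigned to its highest-probability topic as a stand-in for a candidate sense; the paper's analysis operates over child-directed speech corpora rather than toy examples.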