Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches

Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, Aida Nematzadeh
Abstract
People rely heavily on context to enrich meaning beyond what is literally said, enabling concise but effective communication. To interact successfully and naturally with people, user-facing artificial intelligence systems will require similar skills in pragmatics: relying on various types of context — from shared linguistic goals and conventions, to the visual and embodied world — to use language effectively. We survey existing grounded settings and pragmatic modeling approaches and analyze how the task goals, environmental contexts, and communicative affordances in each work enrich linguistic meaning. We present recommendations for future grounded task design to naturally elicit pragmatic phenomena, and suggest directions that focus on a broader range of communicative contexts and affordances.
Anthology ID: 2023.findings-emnlp.840
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 12619–12640
URL: https://aclanthology.org/2023.findings-emnlp.840
DOI: 10.18653/v1/2023.findings-emnlp.840
Cite (ACL): Daniel Fried, Nicholas Tomlin, Jennifer Hu, Roma Patel, and Aida Nematzadeh. 2023. Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12619–12640, Singapore. Association for Computational Linguistics.
Cite (Informal): Pragmatics in Language Grounding: Phenomena, Tasks, and Modeling Approaches (Fried et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.840.pdf