Toward Implicit Reference in Dialog: A Survey of Methods and Data

Lindsey Vanderlyn, Talita Anthonio, Daniel Ortega, Michael Roth, Ngoc Thang Vu
Abstract
Communicating efficiently in natural language often requires leaving information implicit, especially in spontaneous speech. This frequently results in phenomena of incompleteness, such as omitted references, that pose challenges for language processing. In this survey paper, we review the state of the art in research on the automatic processing of such implicit references in dialog scenarios, discuss weaknesses with respect to inconsistencies in task definitions and terminologies, and outline directions for future work. These include, among others, unifying existing tasks, addressing data scarcity, and taking model and annotator uncertainty into account.
Anthology ID:
2022.aacl-main.45
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
587–600
URL:
https://aclanthology.org/2022.aacl-main.45
Cite (ACL):
Lindsey Vanderlyn, Talita Anthonio, Daniel Ortega, Michael Roth, and Ngoc Thang Vu. 2022. Toward Implicit Reference in Dialog: A Survey of Methods and Data. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 587–600, Online only. Association for Computational Linguistics.
Cite (Informal):
Toward Implicit Reference in Dialog: A Survey of Methods and Data (Vanderlyn et al., AACL-IJCNLP 2022)
PDF:
https://aclanthology.org/2022.aacl-main.45.pdf