How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context

Jey Han Lau, Carlos Armendariz, Shalom Lappin, Matthew Purver, Chang Shu


Abstract
We study the influence of context on sentence acceptability. First, we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models on their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state of the art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.
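The unsupervised acceptability measures in this line of work score a sentence by its probability under a language model, normalised for length (and, in some variants, for word frequency). As a minimal illustration of the idea — using a toy add-one-smoothed bigram model rather than the unidirectional or bidirectional neural models the paper actually evaluates — one can compute a mean per-token log probability and check that it ranks a fluent word order above a scrambled one:

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Count unigrams and bigrams over whitespace-tokenised sentences."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def mean_logprob(sentence, unigrams, bigrams):
    """Length-normalised log probability under the bigram LM
    (add-one smoothing) -- one simple acceptability measure."""
    toks = ["<s>"] + sentence.split() + ["</s>"]
    vocab = len(unigrams)
    logp = 0.0
    for prev, cur in zip(toks, toks[1:]):
        logp += math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab))
    return logp / (len(toks) - 1)

# Toy training corpus (purely illustrative).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat slept on the rug",
]
uni, bi = train_bigram_lm(corpus)
good = mean_logprob("the cat sat on the rug", uni, bi)
bad = mean_logprob("rug the on sat cat the", uni, bi)
# The fluent order receives the higher normalised score.
assert good > bad
```

Length normalisation matters because raw log probability systematically penalises longer sentences; the neural models in the paper are plugged into scores of this family in place of the toy bigram model here.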
Anthology ID: 2020.tacl-1.20
Volume: Transactions of the Association for Computational Linguistics, Volume 8
Year: 2020
Address: Cambridge, MA
Editors: Mark Johnson, Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 296–310
URL: https://aclanthology.org/2020.tacl-1.20
DOI: 10.1162/tacl_a_00315
Cite (ACL): Jey Han Lau, Carlos Armendariz, Shalom Lappin, Matthew Purver, and Chang Shu. 2020. How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context. Transactions of the Association for Computational Linguistics, 8:296–310.
Cite (Informal): How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context (Lau et al., TACL 2020)
PDF: https://aclanthology.org/2020.tacl-1.20.pdf