How well do LSTM language models learn filler-gap dependencies?

Satoru Ozaki, Dan Yurovsky, Lori Levin


Anthology ID:
2022.scil-1.6
Volume:
Proceedings of the Society for Computation in Linguistics 2022
Month:
February
Year:
2022
Address:
online
Editors:
Allyson Ettinger, Tim Hunter, Brandon Prickett
Venue:
SCiL
Publisher:
Association for Computational Linguistics
Pages:
76–88
URL:
https://aclanthology.org/2022.scil-1.6
Cite (ACL):
Satoru Ozaki, Dan Yurovsky, and Lori Levin. 2022. How well do LSTM language models learn filler-gap dependencies?. In Proceedings of the Society for Computation in Linguistics 2022, pages 76–88, online. Association for Computational Linguistics.
Cite (Informal):
How well do LSTM language models learn filler-gap dependencies? (Ozaki et al., SCiL 2022)
PDF:
https://aclanthology.org/2022.scil-1.6.pdf
Code
 ikazos/scil2022-fgd
Data
One Billion Word Benchmark, Penn Treebank