ACL Anthology
Rui Chaves (2020)
What Don’t RNN Language Models Learn About Filler-Gap Dependencies?
Rui Chaves
Proceedings of the Society for Computation in Linguistics 2020
Assessing the ability of Transformer-based Neural Models to represent structurally unbounded dependencies
Jillian Da Costa | Rui Chaves
Proceedings of the Society for Computation in Linguistics 2020
Co-authors: Jillian Da Costa (1)
Venues: SCiL (2)