Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?

Prajit Dhar, Arianna Bisazza


Abstract
Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results.
Anthology ID: W18-5453
Volume: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Month: November
Year: 2018
Address: Brussels, Belgium
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 374–377
URL: https://aclanthology.org/W18-5453
DOI: 10.18653/v1/W18-5453
Cite (ACL): Prajit Dhar and Arianna Bisazza. 2018. Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 374–377, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages? (Dhar & Bisazza, EMNLP 2018)
PDF: https://aclanthology.org/W18-5453.pdf