Jonathan Rawski


2022

Benchmarking Compositionality with Formal Languages
Josef Valvoda | Naomi Saphra | Jonathan Rawski | Adina Williams | Ryan Cotterell
Proceedings of the 29th International Conference on Computational Linguistics

Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP acquire this ability while learning from data is an open question. In this paper, we look at this problem from the perspective of formal languages. We use deterministic finite-state transducers to make an unbounded number of datasets with controllable properties governing compositionality. By randomly sampling over many transducers, we explore which of their properties (number of states, alphabet size, number of transitions, etc.) contribute to the learnability of a compositional relation by a neural network. In general, we find that the models either learn the relations completely or not at all. The key factor is transition coverage, which sets a soft learnability limit at 400 examples per transition.
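
The sketch below is a minimal illustration (not the paper's released code) of the setup the abstract describes: randomly generating a deterministic finite-state transducer and sampling input/output string pairs from it, so that dataset properties such as number of states, alphabet size, and per-transition coverage can be controlled. All function names and parameters here are hypothetical.

```python
# Hypothetical sketch: build a random deterministic finite-state transducer (DFST)
# and sample (input, output) pairs from it as a compositional dataset.
import random


def random_dfst(num_states=4, alphabet=("a", "b"), out_alphabet=("x", "y"), seed=0):
    """Build a random total DFST: delta[(state, symbol)] = (next_state, output_symbol)."""
    rng = random.Random(seed)
    delta = {}
    for q in range(num_states):
        for sym in alphabet:
            delta[(q, sym)] = (rng.randrange(num_states), rng.choice(out_alphabet))
    return delta


def transduce(delta, string, start=0):
    """Run the DFST on an input string and return its output string."""
    q, out = start, []
    for sym in string:
        q, o = delta[(q, sym)]
        out.append(o)
    return "".join(out)


def sample_dataset(delta, alphabet=("a", "b"), n=1000, max_len=10, seed=0):
    """Sample n (input, output) pairs; coverage per transition grows with n."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, max_len)))
        pairs.append((s, transduce(delta, s)))
    return pairs


if __name__ == "__main__":
    dfst = random_dfst()
    for x, y in sample_dataset(dfst, n=5):
        print(x, "->", y)
```

In this framing, "transition coverage" would correspond to how many sampled strings exercise each (state, symbol) transition; increasing n raises that count, which is the quantity the abstract ties to learnability.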

2021

Strong generative capacity of morphological processes
Hossep Dolatian | Jonathan Rawski | Jeffrey Heinz
Proceedings of the Society for Computation in Linguistics 2021

2020

Probing RNN Encoder-Decoder Generalization of Subregular Functions using Reduplication
Max Nelson | Hossep Dolatian | Jonathan Rawski | Brandon Prickett
Proceedings of the Society for Computation in Linguistics 2020

Multi-Input Strictly Local Functions for Templatic Morphology
Hossep Dolatian | Jonathan Rawski
Proceedings of the Society for Computation in Linguistics 2020

Multi-Input Strictly Local Functions for Tonal Phonology
Jonathan Rawski | Hossep Dolatian
Proceedings of the Society for Computation in Linguistics 2020

2019

Learning with Partially Ordered Representations
Jane Chandlee | Remi Eyraud | Jeffrey Heinz | Adam Jardine | Jonathan Rawski
Proceedings of the 16th Meeting on the Mathematics of Language