Charles Torres


2023

Evaluating a Century of Progress on the Cognitive Science of Adjective Ordering
William Dyer | Charles Torres | Gregory Scontras | Richard Futrell
Transactions of the Association for Computational Linguistics, Volume 11

The literature on adjective ordering abounds with proposals meant to account for why certain adjectives appear before others in multi-adjective strings (e.g., the small brown box). However, these proposals have been developed and tested primarily in isolation and based on English; few researchers have looked at the combined performance of multiple factors in determining adjective order, and few have evaluated predictors across multiple languages. The current work addresses both of these gaps by using technologies and datasets from natural language processing to evaluate the combined performance of existing proposals across 32 languages. Comparing this performance with both random and idealized baselines, we show that the literature on adjective ordering has made significant, meaningful progress across its many decades, but a sizable gap remains to be explained.
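
As a rough illustration of the kind of comparison described above, the sketch below scores an ordering predictor against a random baseline on attested two-adjective strings. The data format, the predictor interface, and the length-based factor are hypothetical stand-ins chosen for the example, not the paper's actual factors or pipeline.

    import random
    from typing import Callable, Sequence, Tuple

    # An attested pair, in its observed order: (first_adjective, second_adjective).
    Pair = Tuple[str, str]

    def accuracy(predict: Callable[[str, str], Pair], pairs: Sequence[Pair]) -> float:
        """Fraction of attested pairs whose order the predictor reproduces."""
        return sum(predict(a, b) == (a, b) for a, b in pairs) / len(pairs)

    def random_baseline(a: str, b: str) -> Pair:
        # Chance performance: expected accuracy ~0.5 in the long run.
        return (a, b) if random.random() < 0.5 else (b, a)

    def length_predictor(a: str, b: str) -> Pair:
        # A single illustrative factor from the literature: shorter adjectives first.
        return (a, b) if len(a) <= len(b) else (b, a)

    pairs = [("small", "brown"), ("big", "red"), ("old", "wooden")]
    print(accuracy(length_predictor, pairs))
    print(accuracy(random_baseline, pairs))

A combined, multi-factor predictor would slot into the same interface, so its accuracy can be compared directly against the random floor and an idealized ceiling.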

Modeling island effects with probabilistic tier-based strictly local grammars over trees
Charles Torres | Kenneth Hanson | Thomas Graf | Connor Mayer
Proceedings of the Society for Computation in Linguistics 2023

Simpler neural networks prefer subregular languages
Charles Torres | Richard Futrell
Findings of the Association for Computational Linguistics: EMNLP 2023

We apply a continuous relaxation of L0 regularization (Louizos et al., 2017), which induces sparsity, to study the inductive biases of LSTMs. In particular, we are interested in which patterns of formal languages are readily learned and expressed by LSTMs. Across a wide range of tests, we find that sparse LSTMs prefer subregular languages over regular languages, and that the strength of this preference increases as we increase the pressure for sparsity. Furthermore, LSTMs trained on subregular languages have fewer non-zero parameters. We conjecture that this subregular bias in LSTMs is related to the cognitive bias for subregular languages observed in human phonology; both are plausibly downstream of a simplicity bias in a suitable description language.
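
The continuous L0 relaxation the abstract refers to is the hard-concrete gating scheme of Louizos et al. (2017). The PyTorch sketch below shows the core of that scheme; the class name, shapes, and how the gates would be attached to an LSTM's weights are illustrative assumptions, not the paper's code.

    import math

    import torch
    import torch.nn as nn

    # Stretch parameters and temperature following Louizos et al. (2017).
    GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

    class L0Gate(nn.Module):
        """Hard-concrete gate z in [0, 1], applied elementwise to a weight tensor."""

        def __init__(self, shape):
            super().__init__()
            # log_alpha parameterizes each gate's probability of being open.
            self.log_alpha = nn.Parameter(torch.zeros(shape))

        def forward(self):
            if self.training:
                # Reparameterized sample from the hard-concrete distribution.
                u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
                s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / BETA)
            else:
                # Deterministic gate at test time.
                s = torch.sigmoid(self.log_alpha)
            # Stretch to (GAMMA, ZETA), then clip so gates can be exactly 0 or 1.
            return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

        def l0_penalty(self):
            # Expected number of non-zero gates: a differentiable surrogate for L0.
            return torch.sigmoid(self.log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()

In such a setup, each LSTM weight matrix W would be multiplied elementwise by its gate's output during the forward pass, and a term lambda * gate.l0_penalty() would be added to the training loss; increasing lambda corresponds to the increased pressure for sparsity the abstract describes.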