Terry Regier
2024
American Sign Language Handshapes Reflect Pressures for Communicative Efficiency
Kayo Yin | Terry Regier | Dan Klein
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Communicative efficiency is a key topic in linguistics and cognitive psychology, with many studies demonstrating how the pressure to communicate with minimal effort guides the form of natural language. However, this phenomenon is rarely explored in signed languages. This paper shows how handshapes in American Sign Language (ASL) reflect these efficiency pressures and provides new evidence of communicative efficiency in the visual-gestural modality. We focus on hand configurations in native ASL signs and signs borrowed from English to compare efficiency pressures from both ASL and English usage. First, we develop new methodologies to quantify the articulatory effort needed to produce handshapes and the perceptual effort required to recognize them. Then, we analyze correlations between communicative effort and usage statistics in ASL or English. Our findings reveal that frequent ASL handshapes are easier to produce and that pressures for communicative efficiency mostly come from ASL usage, rather than from English lexical borrowing.
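The sketch below illustrates the general kind of effort/frequency correlation analysis the abstract describes: a rank correlation between handshape effort scores and usage counts. The handshape labels, effort values, frequencies, and the use of SciPy's `spearmanr` are illustrative assumptions, not the paper's data or method.

```python
# A minimal sketch of an effort/frequency correlation analysis.
# All handshape labels, effort scores, and counts below are hypothetical.
from scipy.stats import spearmanr

# Hypothetical articulatory effort scores (higher = harder to produce).
effort = {"B": 1.0, "5": 1.2, "A": 1.5, "F": 2.3, "R": 3.1}

# Hypothetical usage counts of the same handshapes in an ASL lexicon.
frequency = {"B": 410, "5": 370, "A": 290, "F": 120, "R": 35}

handshapes = sorted(effort)
rho, p_value = spearmanr(
    [effort[h] for h in handshapes],
    [frequency[h] for h in handshapes],
)

# A negative rho would mean that more frequent handshapes tend to require
# less articulatory effort, the qualitative pattern the abstract reports.
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```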
2020
Semantic categories of artifacts and animals reflect efficient coding
Noga Zaslavsky | Terry Regier | Naftali Tishby | Charles Kemp
Proceedings of the Society for Computation in Linguistics 2020
2018
Probing sentence embeddings for structure-dependent tense
Geoff Bacon | Terry Regier
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Learning universal sentence representations which accurately model sentential semantic content is a current goal of natural language processing research. A prominent and successful approach is to train recurrent neural networks (RNNs) to encode sentences into fixed length vectors. Many core linguistic phenomena that one would like to model in universal sentence representations depend on syntactic structure. Although RNNs do not have explicit syntactic structural representations, there is some evidence that, in addition to their widespread success in practical tasks, they can approximate such structure-dependent phenomena under certain conditions. In this work, we assess RNNs’ ability to learn the structure-dependent phenomenon of main clause tense.
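The sketch below shows a generic probing setup in the spirit of this abstract: a linear classifier trained on fixed-length sentence vectors to predict main clause tense. The placeholder encoder, the example sentences and labels, and the choice of scikit-learn's `LogisticRegression` are assumptions for illustration, not the encoders or data examined in the paper.

```python
# A minimal probing sketch: predict main clause tense from sentence vectors.
# The encoder is a hash-based placeholder standing in for an RNN encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(sentence: str, dim: int = 64) -> np.ndarray:
    """Placeholder encoder: a deterministic pseudo-random vector per sentence."""
    seed = abs(hash(sentence)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

# Main clause tense labels; the embedded clause may carry a different tense,
# which is what makes the phenomenon structure-dependent.
train = [
    ("The dog that barked yesterday sleeps now.", "present"),
    ("The cat that sleeps often chased the mouse.", "past"),
    ("The bird that the cat watched sings.", "present"),
    ("The child who sings well laughed.", "past"),
]
X = np.stack([encode(s) for s, _ in train])
y = [label for _, label in train]

probe = LogisticRegression(max_iter=1000).fit(X, y)

# Probe accuracy on held-out sentences indicates whether main clause tense
# is linearly recoverable from the fixed-length representations.
test = "The teacher who praised the students smiles."
print(probe.predict(encode(test).reshape(1, -1)))
```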
1991
Learning Perceptually-Grounded Semantics in the L₀ Project
Terry Regier
29th Annual Meeting of the Association for Computational Linguistics