Bhargav Srinivasa Desikan
2022
Signal in Noise: Exploring Meaning Encoded in Random Character Sequences with Character-Aware Language Models
Mark Chu | Bhargav Srinivasa Desikan | Ethan Nadler | Donald Ruggiero Lo Sardo | Elise Darragh-Ford | Douglas Guilbeault
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Natural language processing models learn word representations based on the distributional hypothesis, which asserts that word context (e.g., co-occurrence) correlates with meaning. We propose that n-grams composed of random character sequences, or garble, provide a novel context for studying word meaning both within and beyond extant language. In particular, randomly generated character n-grams lack meaning but carry primitive information based on the distribution of characters they contain. By studying the embeddings of a large corpus of garble, extant language, and pseudowords using CharacterBERT, we identify an axis in the model’s high-dimensional embedding space that separates these classes of n-grams. Furthermore, we show that this axis relates to structure within extant language, including word part-of-speech, morphology, and concept concreteness. Thus, in contrast to studies that are mainly limited to extant language, our work reveals that meaning and primitive information are intrinsically linked.
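The setup lends itself to a compact sketch: sample random character n-grams, embed them alongside real words, and look for a direction in embedding space that separates the two classes. The sketch below is illustrative only; random vectors stand in for CharacterBERT embeddings, and the mean-difference axis is an assumption rather than the paper's exact procedure.

```python
import random
import string

import numpy as np

def make_garble(n_grams=1000, min_len=3, max_len=10, seed=0):
    """Generate "garble": n-grams of uniformly random lowercase characters."""
    rng = random.Random(seed)
    return [
        "".join(rng.choices(string.ascii_lowercase, k=rng.randint(min_len, max_len)))
        for _ in range(n_grams)
    ]

garble = make_garble()

# Placeholder embeddings: in the paper these come from CharacterBERT;
# random vectors stand in here so the sketch runs end to end.
dim = 768
rng = np.random.default_rng(0)
garble_emb = rng.normal(size=(len(garble), dim))
word_emb = rng.normal(loc=0.5, size=(1000, dim))  # stand-in for extant words

# One simple way to obtain a separating axis (an assumption, not
# necessarily the paper's method): the unit vector between class means.
axis = word_emb.mean(axis=0) - garble_emb.mean(axis=0)
axis /= np.linalg.norm(axis)

# Projecting any embedding onto this axis yields a scalar score that
# should separate garble from extant language.
scores = np.concatenate([garble_emb, word_emb]) @ axis
```

With real CharacterBERT embeddings in place of the placeholders, the projection scores would trace the garble-to-language axis the abstract describes.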
2020
comp-syn: Perceptually Grounded Word Embeddings with Color
Bhargav Srinivasa Desikan | Tasker Hull | Ethan Nadler | Douglas Guilbeault | Aabir Abubakar Kar | Mark Chu | Donald Ruggiero Lo Sardo
Proceedings of the 28th International Conference on Computational Linguistics
Popular approaches to natural language processing create word embeddings based on textual co-occurrence patterns, but often ignore embodied, sensory aspects of language. Here, we introduce the Python package comp-syn, which provides grounded word embeddings based on the perceptually uniform color distributions of Google Image search results. We demonstrate that comp-syn significantly enriches models of distributional semantics. In particular, we show that (1) comp-syn predicts human judgments of word concreteness with greater accuracy and in a more interpretable fashion than word2vec using low-dimensional word–color embeddings, and (2) comp-syn performs comparably to word2vec on a metaphorical vs. literal word-pair classification task. comp-syn is open-source on PyPI and is compatible with mainstream machine-learning Python packages. Our package release includes word–color embeddings for over 40,000 English words, each associated with crowd-sourced word concreteness judgments.
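The construction the abstract describes can be sketched in a few lines: pool the pixels from a word's image results, map them into a perceptually uniform color space, and histogram them into a fixed-length vector. The sketch below is not comp-syn's actual API; the function name and binning are illustrative assumptions, and CIELAB (via scikit-image) stands in for the package's perceptually uniform space.

```python
import numpy as np
from skimage.color import rgb2lab  # pip install scikit-image

def color_embedding(images, n_bins=8):
    """Embed a word as the pooled color distribution of its images.

    `images` is a list of HxWx3 uint8 RGB arrays (e.g., image-search
    results for the word; retrieval itself is out of scope here).
    Returns a flattened, normalized 3D histogram over color space.
    """
    hist = np.zeros((n_bins,) * 3)
    for img in images:
        # Map pixels into CIELAB, an approximately perceptually uniform space.
        lab = rgb2lab(img / 255.0).reshape(-1, 3)
        counts, _ = np.histogramdd(
            lab, bins=n_bins, range=[(0, 100), (-128, 128), (-128, 128)]
        )
        hist += counts
    return (hist / hist.sum()).ravel()  # 512-dim embedding for n_bins=8
```

Comparing two words then reduces to comparing their histograms, e.g., with a distance between distributions such as Jensen–Shannon divergence.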
Co-authors
- Ethan Nadler 2
- Douglas Guilbeault 2
- Mark Chu 2
- Donald Ruggiero Lo Sardo 2
- Tasker Hull 1
- Aabir Abubakar Kar 1
- Elise Darragh-Ford 1