The Shape of Word Embeddings: Quantifying Non-Isometry with Topological Data Analysis

Ondřej Draganov, Steven Skiena


Abstract
Word embeddings represent language vocabularies as clouds of d-dimensional points. We investigate how much information is conveyed by the overall shape of these clouds, beyond the semantic meaning of individual tokens. Specifically, we use persistent homology from topological data analysis (TDA) to measure distances between language pairs based on the shape of their unlabeled embeddings; these distances quantify the degree of non-isometry between the embeddings. To distinguish whether such differences are random training errors or capture real information about the languages, we use the computed distance matrices to construct language phylogenetic trees over 81 Indo-European languages. Careful evaluation shows that our reconstructed trees exhibit strong and statistically significant similarities to the reference.
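The following is a minimal sketch of the pipeline the abstract describes, not the authors' implementation: compute persistence diagrams of unlabeled word-embedding point clouds, take pairwise diagram distances as a language distance matrix, and cluster it into a tree. It assumes a hypothetical loader `load_embedding(lang)` and uses the ripser, persim, and scipy libraries; the paper may use different tools, homological dimensions, and diagram distances.

```python
import numpy as np
from ripser import ripser                      # Vietoris-Rips persistent homology
from persim import bottleneck                  # distance between persistence diagrams
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def load_embedding(lang):
    """Hypothetical loader: return an (n_words, d) array of word vectors for `lang`."""
    raise NotImplementedError

def diagram(points, dim=1):
    """Persistence diagram of a point cloud in homological dimension `dim`."""
    dgm = ripser(points, maxdim=dim)["dgms"][dim]
    return dgm[np.isfinite(dgm[:, 1])]         # drop infinite bars before comparing

languages = ["en", "de", "fr", "es", "cs"]     # placeholder subset of languages
dgms = [diagram(load_embedding(lang)) for lang in languages]

# Pairwise diagram distances quantify how far the embedding shapes are
# from being isometric to one another.
n = len(languages)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = bottleneck(dgms[i], dgms[j])

# Hierarchical clustering of the distance matrix yields a candidate phylogenetic tree.
Z = linkage(squareform(D), method="average")
dendrogram(Z, labels=languages)
```

Because the diagrams are computed from unlabeled point clouds, the resulting distances depend only on the geometry of each embedding, which is what makes the comparison a measure of non-isometry rather than of token-level semantics.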
Anthology ID:
2024.findings-emnlp.705
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12080–12099
URL:
https://aclanthology.org/2024.findings-emnlp.705
Cite (ACL):
Ondřej Draganov and Steven Skiena. 2024. The Shape of Word Embeddings: Quantifying Non-Isometry with Topological Data Analysis. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12080–12099, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
The Shape of Word Embeddings: Quantifying Non-Isometry with Topological Data Analysis (Draganov & Skiena, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.705.pdf
Data:
 2024.findings-emnlp.705.data.zip