Are there identifiable structural parts in the sentence embedding whole?

Vivi Nastase, Paola Merlo
Abstract
Sentence embeddings from transformer models encode a great deal of linguistic information in a fixed-length vector. We investigate whether structural information – specifically, information about chunks and their structural and semantic properties – can be detected in these representations. We use a dataset of sentences with known chunk structure and two linguistic-intelligence datasets whose solution relies on detecting chunks and, respectively, their grammatical number and their semantic roles. Through an approach involving indirect supervision, and through analyses of task performance and of the internal representations built during learning, we show that information about chunks and their properties can be obtained from sentence embeddings.
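
A minimal illustrative sketch of the underlying idea, not the authors' indirect-supervision setup: compute a fixed-length sentence embedding from a transformer encoder and check whether a simple classifier can recover a chunk-related property from it. The model name (bert-base-uncased), the mean-pooling step, the logistic-regression probe, and the toy chunk-count labels are all assumptions for illustration only.

# Illustrative only: a direct probe over sentence embeddings (the paper itself
# uses indirect supervision; model, pooling, and labels here are assumptions).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def sentence_embedding(sentence: str) -> torch.Tensor:
    # Mean-pool the final hidden layer into one fixed-length vector.
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state        # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1).float()   # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)     # (1, dim)

# Toy sentences labelled with a chunk-related property (illustrative NP counts).
data = [
    ("The cat sleeps.", 1),
    ("Dogs bark.", 1),
    ("The cat chased the mouse.", 2),
    ("The child ate an apple.", 2),
    ("The girl gave the boy a book.", 3),
    ("A teacher showed the class a map.", 3),
]
X = torch.cat([sentence_embedding(s) for s, _ in data]).numpy()
y = [label for _, label in data]

# Fit a linear probe; with real data one would evaluate on held-out sentences.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy of the chunk-count probe:", probe.score(X, y))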
Anthology ID:
2024.blackboxnlp-1.3
Volume:
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2024
Address:
Miami, Florida, US
Editors:
Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, Hanjie Chen
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
23–42
URL:
https://aclanthology.org/2024.blackboxnlp-1.3
Cite (ACL):
Vivi Nastase and Paola Merlo. 2024. Are there identifiable structural parts in the sentence embedding whole?. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 23–42, Miami, Florida, US. Association for Computational Linguistics.
Cite (Informal):
Are there identifiable structural parts in the sentence embedding whole? (Nastase & Merlo, BlackboxNLP 2024)
PDF:
https://aclanthology.org/2024.blackboxnlp-1.3.pdf