Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings

Jacob Turton, Robert Elliott Smith, David Vinson


Abstract
Models based on the transformer architecture, such as BERT, have marked a crucial step forward in the field of Natural Language Processing. Importantly, they allow the creation of word embeddings that capture important semantic information about words in context. However, as single entities, these embeddings are difficult to interpret and the models used to create them have been described as opaque. Binder and colleagues proposed an intuitive embedding space where each dimension is based on one of 65 core semantic features. Unfortunately, the space only exists for a small dataset of 535 words, limiting its uses. Previous work (Utsumi, 2018, 2020; Turton et al., 2020) has shown that Binder features can be derived from static embeddings and successfully extrapolated to a large new vocabulary. Taking the next step, this paper demonstrates that Binder features can be derived from the BERT embedding space. This provides two things: (1) semantic feature values derived from contextualised word embeddings, and (2) insights into how semantic features are represented across the different layers of the BERT model.
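As a rough illustration of the idea in the abstract, the sketch below maps contextualised BERT embeddings to Binder-style feature values with a simple linear probe. It is not the authors' released code: the ridge regression stands in for whatever mapping model the paper actually trains, the layer choice is arbitrary, and the file "binder_ratings.csv" (columns: word, sentence, then the 65 feature ratings) is a hypothetical stand-in for the Binder dataset.

```python
# Minimal sketch, assuming the hypothetical data file described above.
import torch
import pandas as pd
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def word_embedding(sentence: str, word: str, layer: int = 8) -> torch.Tensor:
    """Average one layer's hidden states over the target word's subword tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the target word's subword span inside the tokenised sentence.
    start = next(i for i in range(len(ids)) if ids[i:i + len(word_ids)] == word_ids)
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]        # (seq_len, 768)
    return hidden[start:start + len(word_ids)].mean(dim=0)   # (768,)

# Hypothetical data: each row pairs a word-in-context with its 65 Binder ratings.
df = pd.read_csv("binder_ratings.csv")
feature_cols = df.columns[2:]                                # the 65 feature names
X = torch.stack([word_embedding(s, w) for w, s in zip(df.word, df.sentence)]).numpy()
y = df[feature_cols].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)                     # one linear probe for all 65 features
print("held-out R^2:", probe.score(X_te, y_te))
```

Repeating the fit with `layer` swept from 0 to 12 gives the kind of layer-wise comparison the abstract alludes to, since each hidden layer yields a different probe score.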
Anthology ID:
2021.repl4nlp-1.26
Volume:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
Venue:
RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
248–262
URL:
https://aclanthology.org/2021.repl4nlp-1.26
DOI:
10.18653/v1/2021.repl4nlp-1.26
Cite (ACL):
Jacob Turton, Robert Elliott Smith, and David Vinson. 2021. Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 248–262, Online. Association for Computational Linguistics.
Cite (Informal):
Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings (Turton et al., RepL4NLP 2021)
PDF:
https://aclanthology.org/2021.repl4nlp-1.26.pdf
Data
Billion Word Benchmark, One Billion Word Benchmark, WiC