Will it Unblend?

Yuval Pinter, Cassandra L. Jacobs, Jacob Eisenstein


Abstract
Natural language processing systems often struggle with out-of-vocabulary (OOV) terms, which do not appear in training data. Blends, such as “innoventor”, are one particularly challenging class of OOV, as they are formed by fusing together two or more bases that relate to the intended meaning in unpredictable manners and degrees. In this work, we run experiments on a novel dataset of English OOV blends to quantify the difficulty of interpreting the meanings of blends by large-scale contextual language models such as BERT. We first show that BERT’s processing of these blends does not fully access the component meanings, leaving their contextual representations semantically impoverished. We find this is mostly due to the loss of characters resulting from blend formation. Then, we assess how easily different models can recognize the structure and recover the origin of blends, and find that context-aware embedding systems outperform character-level and context-free embeddings, although their results are still far from satisfactory.
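To make the probing setup described in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's experimental code, which lives in the linked yuvalpinter/unblend repository) of how one might inspect BERT's WordPiece segmentation of a blend and compare the blend's contextual embedding to those of its bases; the carrier sentence and mean-pooling choice are illustrative assumptions:

```python
# Hypothetical illustration only: probe how BERT tokenizes a blend such as
# "innoventor" and how close its contextual embedding lands to its bases.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(word: str) -> torch.Tensor:
    """Mean-pool final-layer vectors over the word's WordPiece tokens."""
    prefix = tokenizer.tokenize("she is a true")
    pieces = tokenizer.tokenize(word)  # an OOV blend splits into several pieces
    tokens = ["[CLS]"] + prefix + pieces + [".", "[SEP]"]
    input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        hidden = model(input_ids).last_hidden_state[0]  # (seq_len, 768)
    start = 1 + len(prefix)  # skip [CLS] and the context prefix
    return hidden[start : start + len(pieces)].mean(dim=0)

blend_vec = embed("innoventor")
for base in ("innovator", "inventor"):
    sim = torch.cosine_similarity(blend_vec, embed(base), dim=0).item()
    print(f"{base}: pieces={tokenizer.tokenize(base)}, cos={sim:.3f}")
```

Mean-pooling over WordPieces is just one common way to get a single vector for a multi-piece word; the paper's actual analyses of semantic impoverishment and base recovery are more involved than this sketch.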
Anthology ID: 2020.findings-emnlp.138
Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
Month: November
Year: 2020
Address: Online
Editors: Trevor Cohn, Yulan He, Yang Liu
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1525–1535
URL: https://aclanthology.org/2020.findings-emnlp.138
DOI: 10.18653/v1/2020.findings-emnlp.138
Cite (ACL): Yuval Pinter, Cassandra L. Jacobs, and Jacob Eisenstein. 2020. Will it Unblend?. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1525–1535, Online. Association for Computational Linguistics.
Cite (Informal): Will it Unblend? (Pinter et al., Findings 2020)
PDF: https://aclanthology.org/2020.findings-emnlp.138.pdf
Optional supplementary material: 2020.findings-emnlp.138.OptionalSupplementaryMaterial.zip
Code: yuvalpinter/unblend