Connecting degree and polarity: An artificial language learning study

Lisa Bylinina, Alexey Tikhonov, Ekaterina Garmash


Abstract
We investigate a new linguistic generalisation in pre-trained language models (taking BERT (Devlin et al., 2019) as a case study). We focus on degree modifiers (expressions like slightly, very, rather, extremely) and test the hypothesis that the degree expressed by a modifier (low, medium or high degree) is related to the modifier’s sensitivity to sentence polarity (whether it shows a preference for affirmative sentences, for negative sentences, or for neither). To probe this connection, we apply the Artificial Language Learning experimental paradigm from psycholinguistics to a neural language model. Our experimental results suggest that BERT generalises in line with existing linguistic observations that relate degree semantics to polarity sensitivity, including the main one: low degree semantics is associated with a preference towards positive polarity.
Anthology ID:
2023.emnlp-main.938
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15168–15177
URL:
https://aclanthology.org/2023.emnlp-main.938
DOI:
10.18653/v1/2023.emnlp-main.938
Cite (ACL):
Lisa Bylinina, Alexey Tikhonov, and Ekaterina Garmash. 2023. Connecting degree and polarity: An artificial language learning study. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15168–15177, Singapore. Association for Computational Linguistics.
Cite (Informal):
Connecting degree and polarity: An artificial language learning study (Bylinina et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.938.pdf
Video:
https://aclanthology.org/2023.emnlp-main.938.mp4