A State-Vector Framework for Dataset Effects

Esmat Sahak, Zining Zhu, Frank Rudzicz


Abstract
The impressive success of recent deep neural network (DNN)-based systems is significantly influenced by the high-quality datasets used in training. However, the effects of these datasets, especially how they interact with each other, remain underexplored. We propose a state-vector framework to enable rigorous studies in this direction. The framework uses idealized probing test results as the bases of a vector space, which allows us to quantify the effects of both standalone and interacting datasets. We show that the significant effects of some commonly used language understanding datasets are characteristic and concentrated on a few linguistic dimensions. Additionally, we observe some "spill-over" effects: the datasets can impact models along dimensions that may seem unrelated to the intended tasks. Our state-vector framework paves the way for a systematic understanding of dataset effects, a crucial component in responsible and robust model development.
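To make the abstract's idea concrete, here is a minimal, purely illustrative sketch (not the authors' released code): a model's "state" is represented as a vector of probing-test scores, a standalone dataset effect as the shift from the base model's state after fine-tuning, and an interaction effect as the deviation from the additive combination of standalone effects. The probe names, function names, and the additive formulation are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical probe dimensions (assumed for illustration; the paper's actual
# probing suite may differ).
PROBES = ["tense", "subj_number", "tree_depth", "bigram_shift", "coord_inversion"]

def state_vector(probe_scores: dict) -> np.ndarray:
    """Stack idealized probing results into a vector over the probe dimensions."""
    return np.array([probe_scores[p] for p in PROBES], dtype=float)

def standalone_effect(base: np.ndarray, tuned: np.ndarray) -> np.ndarray:
    """Effect of fine-tuning on one dataset: shift from the base model's state."""
    return tuned - base

def interaction_effect(base: np.ndarray,
                       tuned_a: np.ndarray,
                       tuned_b: np.ndarray,
                       tuned_ab: np.ndarray) -> np.ndarray:
    """How the jointly fine-tuned state deviates from what the two standalone
    effects would predict under a simple additive assumption."""
    return (tuned_ab - base) - (standalone_effect(base, tuned_a)
                                + standalone_effect(base, tuned_b))

# Example usage with made-up probing scores:
base = state_vector({p: 0.70 for p in PROBES})
after_a = state_vector({**{p: 0.70 for p in PROBES}, "tense": 0.85})
after_b = state_vector({**{p: 0.70 for p in PROBES}, "tree_depth": 0.80})
after_ab = state_vector({**{p: 0.70 for p in PROBES}, "tense": 0.83, "tree_depth": 0.82})
print(interaction_effect(base, after_a, after_b, after_ab))
```

Under this reading, a "spill-over" effect would show up as a nonzero component of a standalone effect along a probe dimension that seems unrelated to the fine-tuning task.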
Anthology ID:
2023.emnlp-main.942
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
15231–15245
URL:
https://aclanthology.org/2023.emnlp-main.942
DOI:
10.18653/v1/2023.emnlp-main.942
Cite (ACL):
Esmat Sahak, Zining Zhu, and Frank Rudzicz. 2023. A State-Vector Framework for Dataset Effects. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 15231–15245, Singapore. Association for Computational Linguistics.
Cite (Informal):
A State-Vector Framework for Dataset Effects (Sahak et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.942.pdf
Video:
https://aclanthology.org/2023.emnlp-main.942.mp4