StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure

Mattia Opper, Victor Prokhorov, Siddharth N


Abstract
This work presents StrAE: a Structured Autoencoder framework that, through strict adherence to explicit structure and the use of a novel contrastive objective over tree-structured representations, enables effective learning of multi-level representations. Through comparison over different forms of structure, we verify that our results are directly attributable to the informativeness of the structure provided as input, and show that this is not the case for existing tree models. We then further extend StrAE to allow the model to define its own compositions using a simple localised-merge algorithm. This variant, called Self-StrAE, outperforms baselines that don't involve explicit hierarchical compositions, and is comparable to models given informative structure (e.g. constituency parses). Our experiments are conducted in a data-constrained (circa 10M tokens) setting to help tease apart the contribution of the inductive bias to effective learning. However, we find that this framework can be robust to scale, and when extended to a much larger dataset (circa 100M tokens), our 430-parameter model performs comparably to a 6-layer RoBERTa that is many orders of magnitude larger in size. Our findings support the utility of incorporating explicit composition as an inductive bias for effective representation learning.
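The abstract describes the approach only at a high level; the Python sketch below is not the authors' implementation. It illustrates, under stated assumptions, the general shape of the two ideas the abstract names: a localised-merge step that greedily induces a binary tree by composing adjacent embeddings (the Self-StrAE variant), and a contrastive objective that matches node embeddings across the tree. The averaging composition, the cosine-similarity merge criterion, and the InfoNCE-style loss are all illustrative assumptions; consult the PDF for the actual model and objective.

import numpy as np

def compose(left, right):
    # Placeholder composition function. StrAE learns this mapping (the
    # paper's ~430 parameters include it); here we simply average two
    # child embeddings for illustration.
    return (left + right) / 2.0

def cosine(a, b):
    # Cosine similarity between two 1-D embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def local_merge(leaves):
    """Greedily merge the most similar adjacent pair until one root remains.

    Mirrors the *idea* of a localised-merge algorithm: structure is induced
    bottom-up from local decisions, and each internal node of the resulting
    binary tree is a composed embedding (a multi-level representation).
    The similarity-based merge criterion is an assumption, not the paper's.
    """
    nodes = list(leaves)
    internal = []
    while len(nodes) > 1:
        # Pick the adjacent pair with the highest cosine similarity.
        i = max(range(len(nodes) - 1), key=lambda j: cosine(nodes[j], nodes[j + 1]))
        merged = compose(nodes[i], nodes[i + 1])
        internal.append(merged)
        nodes[i:i + 2] = [merged]
    return nodes[0], internal  # root embedding, composed internal nodes

def contrastive_loss(enc_nodes, dec_nodes, temperature=0.1):
    """Assumed InfoNCE-style objective over tree nodes: each encoder node
    embedding should be most similar to its corresponding decoder node
    embedding (the diagonal) among all candidates."""
    sims = np.array([[cosine(e, d) for d in dec_nodes] for e in enc_nodes])
    sims /= temperature
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Usage sketch: induce a tree over random "token" embeddings, then score
# hypothetical encoder/decoder node embeddings with the contrastive loss.
tokens = [np.random.randn(8) for _ in "an illustrative example sentence".split()]
root, internal = local_merge(tokens)
loss = contrastive_loss(internal, internal)  # self-match: loss near its minimum

In the actual framework the composition and decomposition functions are learned, and the decoder unrolls the same tree top-down; the greedy similarity merge above merely stands in for however Self-StrAE selects which adjacent pair to compose.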
Anthology ID:
2023.emnlp-main.469
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7544–7560
URL:
https://aclanthology.org/2023.emnlp-main.469
DOI:
10.18653/v1/2023.emnlp-main.469
Cite (ACL):
Mattia Opper, Victor Prokhorov, and Siddharth N. 2023. StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7544–7560, Singapore. Association for Computational Linguistics.
Cite (Informal):
StrAE: Autoencoding for Pre-Trained Embeddings using Explicit Structure (Opper et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.469.pdf
Video:
https://aclanthology.org/2023.emnlp-main.469.mp4