Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings

Yichen Jiang, Xiang Zhou, Mohit Bansal


Abstract
Transformers generalize to novel compositions of structures and entities after being trained on a complex dataset, but easily overfit on datasets of insufficient complexity. We observe that when the training set is sufficiently complex, the model encodes structurally equivalent sentences using a systematic attention pattern. Inspired by this observation, we propose SQ-Transformer (Structurally Quantized) that explicitly encourages systematicity in the embeddings and attention layers even with low-complexity data. At the embedding level, we introduce Structure-oriented Vector Quantization (SoVQ) to cluster word embeddings into several classes of structurally equivalent entities. At the attention level, we devise the Systematic Attention Layer (SAL) and an alternative, the Systematically Regularized Layer (SRL), both of which operate on the quantized word embeddings so that sentences of the same structure are encoded with invariant or similar attention patterns. Empirically, we show that SQ-Transformer achieves stronger compositional generalization than the vanilla Transformer on multiple low-complexity semantic parsing and machine translation datasets. In our analysis, we show that SoVQ indeed learns a syntactically clustered embedding space and that SAL/SRL induces generalizable attention patterns, altogether leading to improved systematicity.
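For intuition, the structural quantization step described in the abstract can be sketched as a nearest-neighbor vector quantization of the word embedding table. The snippet below is a minimal illustration only, assuming hypothetical sizes, a plain Euclidean codebook lookup, and a straight-through gradient; it is not the paper's actual SoVQ objective or training procedure.

# Minimal sketch of nearest-neighbor vector quantization over word embeddings,
# in the spirit of Structure-oriented Vector Quantization (SoVQ).
# Codebook size, dimensions, and the plain nearest-neighbor assignment are
# illustrative assumptions, not the paper's published formulation.
import torch

vocab_size, d_model, num_classes = 1000, 64, 8   # hypothetical sizes

word_emb = torch.randn(vocab_size, d_model)       # word embedding table
codebook = torch.randn(num_classes, d_model)      # one vector per entity class

# Assign each word to its nearest codebook entry (Euclidean distance).
dists = torch.cdist(word_emb, codebook)           # (vocab_size, num_classes)
class_ids = dists.argmin(dim=-1)                  # structural class per word
quantized = codebook[class_ids]                   # quantized word embeddings

# Straight-through trick so gradients still reach the original embeddings
# when the quantized vectors are fed to downstream layers.
quantized_st = word_emb + (quantized - word_emb).detach()

In the paper, attention layers (SAL/SRL) then operate on such quantized embeddings, so that words in the same structural class contribute the same class-level vectors and structurally equivalent sentences yield invariant or similar attention patterns.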
Anthology ID: 2024.acl-long.455
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 8360–8383
URL: https://aclanthology.org/2024.acl-long.455
Cite (ACL):
Yichen Jiang, Xiang Zhou, and Mohit Bansal. 2024. Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8360–8383, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings (Jiang et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.455.pdf