FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations

Lukas Lange, Heike Adel, Jannik Strötgen, Dietrich Klakow


Abstract
Combining several embeddings typically improves performance in downstream tasks as different embeddings encode different information. It has been shown that even models using embeddings from transformers still benefit from the inclusion of standard word embeddings. However, the combination of embeddings of different types and dimensions is challenging. As an alternative to attention-based meta-embeddings, we propose feature-based adversarial meta-embeddings (FAME) with an attention function that is guided by features reflecting word-specific properties, such as shape and frequency, and show that this is beneficial for handling subword-based embeddings. In addition, FAME uses adversarial training to optimize the mappings of differently-sized embeddings to the same space. We demonstrate that FAME works effectively across languages and domains for sequence labeling and sentence classification, in particular in low-resource settings. FAME sets the new state of the art for POS tagging in 27 languages, various NER settings, and question classification in different domains.
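The core idea in the abstract, projecting differently-sized embeddings into a shared space and combining them with an attention function guided by word-level features, can be illustrated with a minimal sketch. This is not the authors' implementation: the dimensions, feature choices, and random projection matrices below are hypothetical stand-ins for learned parameters, and the adversarial discriminator that aligns the projected spaces is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two embedding types of different sizes,
# both projected into a shared space of size d_common.
d_common = 8
embed_dims = [300, 768]   # e.g. a classic word embedding and a transformer embedding
n_features = 3            # toy word features: length, capitalization, log-frequency

# Random matrices stand in for learned projection layers.
projections = [rng.normal(scale=0.1, size=(d, d_common)) for d in embed_dims]
# Feature-based attention scorer: maps word features to one score per embedding type.
W_att = rng.normal(scale=0.1, size=(n_features, len(embed_dims)))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def feature_attention_combine(embeddings, features):
    """Combine embeddings via feature-guided attention (sketch).

    embeddings: list of vectors, one per embedding type
    features:   vector of word-level features that guides the attention
    """
    projected = [e @ P for e, P in zip(embeddings, projections)]
    weights = softmax(features @ W_att)  # one attention weight per embedding type
    meta = sum(w * p for w, p in zip(weights, projected))
    return meta, weights

word_embs = [rng.normal(size=d) for d in embed_dims]
word_feats = np.array([5.0, 1.0, 2.3])   # toy feature values for one word
meta, weights = feature_attention_combine(word_embs, word_feats)
print(meta.shape)  # shared-space meta-embedding, shape (8,)
```

Because the attention weights depend on the word's features rather than only on the embeddings themselves, a rare or unusually shaped word can be routed toward the embedding type that handles it best (e.g. a subword-based one), which is the intuition the paper builds on.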
Anthology ID:
2021.emnlp-main.660
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8382–8395
URL:
https://aclanthology.org/2021.emnlp-main.660
DOI:
10.18653/v1/2021.emnlp-main.660
Cite (ACL):
Lukas Lange, Heike Adel, Jannik Strötgen, and Dietrich Klakow. 2021. FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8382–8395, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations (Lange et al., EMNLP 2021)
PDF:
https://aclanthology.org/2021.emnlp-main.660.pdf
Video:
https://aclanthology.org/2021.emnlp-main.660.mp4
Code:
boschresearch/adversarial_meta_embeddings