Attention-Focused Adversarial Training for Robust Temporal Reasoning

Lis Kanashiro Pereira


Abstract
We propose an enhanced adversarial training algorithm for fine-tuning transformer-based language models (e.g., RoBERTa) and apply it to the temporal reasoning task. Current adversarial training approaches for NLP add the adversarial perturbation only to the embedding layer, ignoring the model's other layers, which may limit the generalization power of adversarial training. Instead, our algorithm searches for the best combination of layers to which the adversarial perturbation is added, perturbing multiple hidden states or attention representations across the model's layers. Adding the perturbation to the attention representations performed best in our experiments. Our model improves performance on several temporal reasoning benchmarks and establishes new state-of-the-art results.
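The core idea, perturbing an inner attention representation rather than only the input embeddings, can be sketched as follows. This is a minimal illustration in PyTorch, not the paper's implementation: the toy model, the single-step FGSM-style perturbation, and all hyperparameters (`eps`, dimensions) are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy single-head self-attention followed by a classifier head
# (stands in for one transformer layer; not the paper's model).
class ToyAttentionClassifier(torch.nn.Module):
    def __init__(self, dim=8, n_classes=2):
        super().__init__()
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.out = torch.nn.Linear(dim, n_classes)

    def forward(self, x, attn_perturb=None):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        attn = scores.softmax(dim=-1) @ v        # attention representation
        if attn_perturb is not None:
            attn = attn + attn_perturb           # inject perturbation here, not at the embeddings
        return self.out(attn.mean(dim=1))        # pool over tokens, classify

model = ToyAttentionClassifier()
x = torch.randn(4, 5, 8)                         # (batch, seq_len, dim)
y = torch.randint(0, 2, (4,))

# Compute the gradient of the loss w.r.t. a perturbation placed on the
# attention output, then take one normalized ascent step (FGSM-style).
delta = torch.zeros(4, 5, 8, requires_grad=True)
F.cross_entropy(model(x, delta), y).backward()
eps = 1e-3                                       # assumed perturbation radius
with torch.no_grad():
    delta_adv = eps * delta.grad / (delta.grad.norm() + 1e-8)

# Adversarial training objective: clean loss plus loss under the
# attention-level perturbation.
total = F.cross_entropy(model(x), y) + F.cross_entropy(model(x, delta_adv), y)
```

The only change relative to standard embedding-level adversarial training is where `delta` enters the forward pass; a layer search as described in the abstract would repeat this at different layers (or combinations of layers) and keep the best-performing placement.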
Anthology ID:
2022.lrec-1.800
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association
Note:
Pages:
7352–7359
URL:
https://aclanthology.org/2022.lrec-1.800
Cite (ACL):
Lis Kanashiro Pereira. 2022. Attention-Focused Adversarial Training for Robust Temporal Reasoning. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7352–7359, Marseille, France. European Language Resources Association.
Cite (Informal):
Attention-Focused Adversarial Training for Robust Temporal Reasoning (Kanashiro Pereira, LREC 2022)
PDF:
https://aclanthology.org/2022.lrec-1.800.pdf
Data
CosmosQA, MATRES, MC-TACO