Mohammed Erradi
2026
ART: Attention-Regularized Transformers for Multi-Modal Robustness
Mohammed Bouri | Mohammed Erradi | Adnane Saoud
Findings of the Association for Computational Linguistics: EACL 2026
Transformers have become the standard in Natural Language Processing (NLP) and Computer Vision (CV) due to their strong performance, yet they remain highly sensitive to small input changes, often referred to as adversarial attacks, such as synonym swaps in text or pixel-level perturbations in images. These adversarial attacks can mislead predictions, while existing defenses are often domain-specific or lack formal robustness guarantees. We propose the Attention-Regularized Transformer (ART), a framework that enhances robustness across modalities. ART builds on the Attention Sensitivity Tensor (AST), which quantifies the effect of input perturbations on attention outputs. By incorporating an AST-based regularizer into training, ART encourages stable attention maps under adversarial perturbations in both text and image tasks. We evaluate ART on IMDB, QNLI, CIFAR-10, CIFAR-100, and Imagenette. Results show consistent robustness gains over strong baselines such as FreeLB and DSRM: up to +36.9% robust accuracy on IMDB and QNLI, and +5–25% on image benchmarks across multiple Vision Transformer (ViT) architectures, while maintaining or improving clean accuracy. ART is also highly efficient, training over 10× faster than adversarial methods on text and requiring only 1.25× the cost of standard training on images, compared to 1.5–5.5× for recent robust ViTs. Code is available at [https://github.com/cliclab-um6p/ART](https://github.com/cliclab-um6p/ART)
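The abstract does not spell out the AST-based regularizer, so the following is only a hypothetical sketch of the general idea it describes, penalizing the change in attention maps under an input perturbation. It assumes single-head scaled dot-product attention and a squared Frobenius-norm penalty; the function names, the perturbation, and the penalty form are all illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(X, Wq, Wk):
    # Standard scaled dot-product attention weights for input X (seq_len, d_model).
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(K.shape[-1]))

def attention_stability_penalty(X, X_adv, Wq, Wk):
    # Squared Frobenius distance between clean and perturbed attention maps;
    # adding lam * penalty to the task loss encourages attention maps that
    # stay stable under small input perturbations.
    A_clean = attention_map(X, Wq, Wk)
    A_adv = attention_map(X_adv, Wq, Wk)
    return np.sum((A_clean - A_adv) ** 2)
```

In training, the total objective would then look like `task_loss + lam * attention_stability_penalty(X, X + delta, Wq, Wk)` for some perturbation `delta` and weight `lam`, both of which are hypothetical here.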
2025
UOREX: Towards Uncertainty-Aware Open Relation Extraction
Rebii Jamal | Mounir Ourekouch | Mohammed Erradi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Open relation extraction (OpenRE) aims to identify relational facts within open-domain corpora without relying on predefined relation types. A significant limitation of current state-of-the-art OpenRE approaches is their inability to accurately self-assess their performance, a limitation caused by their reliance on pseudo-labels, which treat all points within a cluster equally regardless of their position relative to the cluster center. This leads to models that are often overconfident in their incorrect predictions, significantly undermining their reliability. In this paper, we introduce an approach that addresses this challenge by effectively modeling a part of the epistemic uncertainty within OpenRE. Instead of using pseudo-labels that mask uncertainty, our approach trains a classifier directly on the clustering distribution. Our experimental results across various datasets demonstrate that the suggested approach improves the reliability of OpenRE by preventing overconfident errors. Furthermore, we show that by improving the reliability of the predictions, UOREX operates more efficiently in a generative active learning context where an LLM is the oracle, doubling the performance gain compared to the state-of-the-art.
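The abstract contrasts hard pseudo-labels with training on the clustering distribution itself. As a rough illustrative sketch, not the paper's actual method, the snippet below computes soft cluster assignments from distances to cluster centers and uses them as soft targets in a cross-entropy loss, so points far from every center contribute a flatter, more uncertain target instead of a confident pseudo-label. The temperature, the distance-based softmax, and all names are assumptions.

```python
import numpy as np

def soft_cluster_assignments(embeddings, centers, temperature=1.0):
    # Soft assignment: softmax over negative squared distances to cluster
    # centers. Points near one center get a peaked target; points far from
    # all centers get a flatter (more uncertain) one.
    d2 = ((embeddings[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def soft_target_cross_entropy(pred_probs, soft_targets, eps=1e-12):
    # Cross-entropy against the soft clustering distribution rather than
    # a hard argmax pseudo-label, preserving the cluster's uncertainty.
    return -np.mean((soft_targets * np.log(pred_probs + eps)).sum(axis=1))
```

A classifier trained this way is penalized for being confident on ambiguous points, which is one plausible route to the "preventing overconfident errors" behavior the abstract reports.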