Multilingual Self-Taught Faithfulness Evaluators
Carlo Alfano | Aymen Al Marjani | Zeno Jonke | Amin Mantrach | Saab Mansour | Marcello Federico
Findings of the Association for Computational Linguistics: EACL 2026
The growing use of large language models (LLMs) has increased the need for automatic evaluation systems, particularly to address the challenge of hallucinated information. Although existing faithfulness evaluation approaches have shown promise, they are predominantly English-focused and often require expensive human-labeled training data for fine-tuning specialized models. As LLMs see increasing adoption in multilingual contexts, there is a need for accurate faithfulness evaluators that can operate across languages without extensive labeled data. This paper presents STEMF (Self-Taught Evaluators for Multilingual Faithfulness), a framework that learns exclusively from synthetic multilingual data while leveraging cross-lingual transfer learning. Through experiments comparing language-specific and mixed-language fine-tuning approaches, we demonstrate a consistent relationship between an LLM's general language capabilities and its performance on language-specific evaluation tasks. Our framework shows improvements over existing baselines, including state-of-the-art English evaluators and machine translation-based approaches.