Rafael M. O. Cruz
2026
HARM: Learning Hate-Aware Reward Model for Evaluating Natural Language Explanations of Offensive Content
Lorenzo Puppi Vecchi | Alceu De Souza Britto Jr. | Emerson Cabrera Paraiso | Rafael M. O. Cruz
Findings of the Association for Computational Linguistics: EACL 2026
Explaining why content is hateful using natural language is crucial for fostering transparency in automated content moderation systems. However, evaluating the quality of such explanations remains an open challenge. General-purpose reward models (RMs), commonly used for scoring natural language outputs, are typically optimized for broad notions of safety. We argue that this optimization penalizes situations where references to stereotypes or offensive content are essential for explanations with higher explanatory fidelity. To address this gap, we introduce SBIC-Explain, a human-validated dataset of 370,788 LLM-generated NLEs for offensive content, spanning three levels of human-annotated contextual richness: Tier 1: text-only, Tier 2: + classification-aware, and Tier 3: + semantics-informed. We hypothesize that as human-annotated context increases, explanations should exhibit higher explanatory fidelity. Yet we find that existing RMs systematically assign lower scores to more contextually rich (and often more offensive) explanations, revealing a misalignment between model preferences and explanatory fidelity in this setting. We propose HARM (Hate-Aware Reward Model), an RM that integrates interpretable signals to better align reward scores with the needs of hate speech explanation. HARM outperforms general-purpose baselines, improving NLE pairwise preference. Available at: https://github.com/Lorenzo815/HARM.
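Reward models of this kind are commonly trained on pairwise human preferences with a Bradley-Terry-style loss, and the abstract describes HARM as combining a reward score with interpretable signals. The sketch below is illustrative only: the loss is the standard pairwise-preference objective, and `hate_aware_reward` with its `weights`/`signal_features` is a hypothetical linear combination, not the paper's actual architecture.

```python
import numpy as np


def pairwise_preference_loss(r_chosen, r_rejected):
    """Standard Bradley-Terry pairwise loss for reward-model training:
    mean of -log(sigmoid(r_chosen - r_rejected)) over preference pairs."""
    diff = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    # log1p(exp(-d)) == -log(sigmoid(d)), written for numerical stability
    return float(np.mean(np.log1p(np.exp(-diff))))


def hate_aware_reward(base_score, signal_features, weights):
    """Hypothetical combination of a base RM score with interpretable
    signals (e.g., fidelity cues) through a linear head; the real HARM
    design is not specified in the abstract."""
    return float(base_score + np.dot(signal_features, weights))
```

With equal scores the loss is log 2 (the model is indifferent); it shrinks as the chosen explanation's reward exceeds the rejected one's, which is the training signal that teaches the RM to prefer higher-fidelity NLEs.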
2025
DRES: Fake news detection by dynamic representation and ensemble selection
Faramarz Farhangian | Leandro Augusto Ensina | George D C Cavalcanti | Rafael M. O. Cruz
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The rapid spread of information via social media has made text-based fake news detection critically important due to its societal impact. This paper presents a novel detection method called Dynamic Representation and Ensemble Selection (DRES) for identifying fake news based solely on text. DRES leverages instance hardness measures to estimate the classification difficulty of each news article across multiple textual feature representations. By dynamically selecting the textual representation and the most competent ensemble of classifiers for each instance, DRES significantly enhances prediction accuracy. Extensive experiments show that DRES achieves notable improvements over state-of-the-art methods, confirming the effectiveness of representation selection based on instance hardness and dynamic ensemble selection in boosting performance. Code and data are available at: https://github.com/FFarhangian/FakeNewsDetection_DRES
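The core idea of per-instance representation selection via instance hardness can be sketched with a simple k-NN disagreement proxy: for each test instance, estimate how "hard" it looks under each feature representation and keep the one where it appears easiest. This is a minimal illustration under assumed names (`knn_hardness`, `select_representation`), not the authors' implementation, which also selects classifier ensembles.

```python
import numpy as np


def knn_hardness(x, X_train, y_train, k=3):
    """Instance-hardness proxy: fraction of the k nearest training
    neighbours whose label disagrees with the neighbourhood majority."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nn_labels = y_train[np.argsort(dists)[:k]]
    majority = np.bincount(nn_labels).argmax()
    return float(np.mean(nn_labels != majority))


def select_representation(x_views, train_views, y_train, k=3):
    """Pick, per instance, the index of the representation under which
    the instance looks easiest (lowest estimated hardness)."""
    scores = [knn_hardness(x, X_tr, y_train, k)
              for x, X_tr in zip(x_views, train_views)]
    return int(np.argmin(scores))
```

In practice the same hardness estimates can also gate which classifiers join the ensemble for that instance, which is the "dynamic ensemble selection" half of DRES.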