@inproceedings{king-etal-2025-hearts,
title = "{HEARTS}: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection",
author = "King, Theo and
Wu, Zekun and
Koshiyama, Adriano and
Kazim, Emre and
Treleaven, Philip Colin",
editor = "Inui, Kentaro and
Sakti, Sakriani and
Wang, Haofen and
Wong, Derek F. and
Bhattacharyya, Pushpak and
Banerjee, Biplab and
Ekbal, Asif and
Chakraborty, Tanmoy and
Singh, Dhirendra Pratap",
booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
month = dec,
year = "2025",
address = "Mumbai, India",
publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
url = "https://aclanthology.org/2025.ijcnlp-long.1/",
pages = "1--18",
ISBN = "979-8-89176-298-5",
    abstract = "A stereotype is a generalised claim about a social group. Such claims change with culture and context and are often phrased in everyday language, which makes them hard to detect: state-of-the-art large language models (LLMs) reach only 68{\%} macro-F1 on the yes/no task ``does this sentence contain a stereotype?''. We present HEARTS, a Holistic framework for Explainable, sustAinable and Robust Text Stereotype detection that brings together NLP and social science. The framework is built on the Expanded Multi-Grain Stereotype Dataset (EMGSD), 57,201 English sentences that cover gender, profession, nationality, race, religion and LGBTQ+ topics, adding 10{\%} more data for under-represented groups while keeping high annotator agreement ($\kappa = 0.82$). Fine-tuning the lightweight ALBERT-v2 model on EMGSD raises binary detection scores to 81.5{\%} macro-F1, matching full BERT while producing 200$\times$ less CO$_2$. For explainability, we blend SHAP and LIME token-level scores and introduce a confidence measure that increases when the model is correct ($\rho = 0.18$). We then use HEARTS to assess 16 SOTA LLMs on 1,050 neutral prompts each for stereotype propagation: stereotype rates fall by 23{\%} between model generations, yet clear differences remain across model families (LLaMA $>$ Gemini $>$ GPT $>$ Claude). HEARTS thus supplies a practical, low-carbon and interpretable toolkit for measuring stereotype bias in language."
}
<?xml version="1.0" encoding="UTF-8"?>
<modsCollection xmlns="http://www.loc.gov/mods/v3">
<mods ID="king-etal-2025-hearts">
<titleInfo>
<title>HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection</title>
</titleInfo>
<name type="personal">
<namePart type="given">Theo</namePart>
<namePart type="family">King</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Zekun</namePart>
<namePart type="family">Wu</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Adriano</namePart>
<namePart type="family">Koshiyama</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Emre</namePart>
<namePart type="family">Kazim</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Philip</namePart>
<namePart type="given">Colin</namePart>
<namePart type="family">Treleaven</namePart>
<role>
<roleTerm authority="marcrelator" type="text">author</roleTerm>
</role>
</name>
<originInfo>
<dateIssued>2025-12</dateIssued>
</originInfo>
<typeOfResource>text</typeOfResource>
<relatedItem type="host">
<titleInfo>
<title>Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics</title>
</titleInfo>
<name type="personal">
<namePart type="given">Kentaro</namePart>
<namePart type="family">Inui</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Sakriani</namePart>
<namePart type="family">Sakti</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Haofen</namePart>
<namePart type="family">Wang</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Derek</namePart>
<namePart type="given">F</namePart>
<namePart type="family">Wong</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Pushpak</namePart>
<namePart type="family">Bhattacharyya</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Biplab</namePart>
<namePart type="family">Banerjee</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Asif</namePart>
<namePart type="family">Ekbal</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Tanmoy</namePart>
<namePart type="family">Chakraborty</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<name type="personal">
<namePart type="given">Dhirendra</namePart>
<namePart type="given">Pratap</namePart>
<namePart type="family">Singh</namePart>
<role>
<roleTerm authority="marcrelator" type="text">editor</roleTerm>
</role>
</name>
<originInfo>
<publisher>The Asian Federation of Natural Language Processing and The Association for Computational Linguistics</publisher>
<place>
<placeTerm type="text">Mumbai, India</placeTerm>
</place>
</originInfo>
<genre authority="marcgt">conference publication</genre>
<identifier type="isbn">979-8-89176-298-5</identifier>
</relatedItem>
<abstract>A stereotype is a generalised claim about a social group. Such claims change with culture and context and are often phrased in everyday language, which makes them hard to detect: state-of-the-art large language models (LLMs) reach only 68% macro-F1 on the yes/no task “does this sentence contain a stereotype?”. We present HEARTS, a Holistic framework for Explainable, sustAinable and Robust Text Stereotype detection that brings together NLP and social science. The framework is built on the Expanded Multi-Grain Stereotype Dataset (EMGSD), 57,201 English sentences that cover gender, profession, nationality, race, religion and LGBTQ+ topics, adding 10% more data for under-represented groups while keeping high annotator agreement (κ = 0.82). Fine-tuning the lightweight ALBERT-v2 model on EMGSD raises binary detection scores to 81.5% macro-F1, matching full BERT while producing 200× less CO₂. For explainability, we blend SHAP and LIME token-level scores and introduce a confidence measure that increases when the model is correct (ρ = 0.18). We then use HEARTS to assess 16 SOTA LLMs on 1,050 neutral prompts each for stereotype propagation: stereotype rates fall by 23% between model generations, yet clear differences remain across model families (LLaMA > Gemini > GPT > Claude). HEARTS thus supplies a practical, low-carbon and interpretable toolkit for measuring stereotype bias in language.</abstract>
<identifier type="citekey">king-etal-2025-hearts</identifier>
<location>
<url>https://aclanthology.org/2025.ijcnlp-long.1/</url>
</location>
<part>
<date>2025-12</date>
<extent unit="page">
<start>1</start>
<end>18</end>
</extent>
</part>
</mods>
</modsCollection>
%0 Conference Proceedings
%T HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection
%A King, Theo
%A Wu, Zekun
%A Koshiyama, Adriano
%A Kazim, Emre
%A Treleaven, Philip Colin
%Y Inui, Kentaro
%Y Sakti, Sakriani
%Y Wang, Haofen
%Y Wong, Derek F.
%Y Bhattacharyya, Pushpak
%Y Banerjee, Biplab
%Y Ekbal, Asif
%Y Chakraborty, Tanmoy
%Y Singh, Dhirendra Pratap
%S Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
%D 2025
%8 December
%I The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
%C Mumbai, India
%@ 979-8-89176-298-5
%F king-etal-2025-hearts
%X A stereotype is a generalised claim about a social group. Such claims change with culture and context and are often phrased in everyday language, which makes them hard to detect: state-of-the-art large language models (LLMs) reach only 68% macro-F1 on the yes/no task “does this sentence contain a stereotype?”. We present HEARTS, a Holistic framework for Explainable, sustAinable and Robust Text Stereotype detection that brings together NLP and social science. The framework is built on the Expanded Multi-Grain Stereotype Dataset (EMGSD), 57,201 English sentences that cover gender, profession, nationality, race, religion and LGBTQ+ topics, adding 10% more data for under-represented groups while keeping high annotator agreement (κ = 0.82). Fine-tuning the lightweight ALBERT-v2 model on EMGSD raises binary detection scores to 81.5% macro-F1, matching full BERT while producing 200× less CO₂. For explainability, we blend SHAP and LIME token-level scores and introduce a confidence measure that increases when the model is correct (ρ = 0.18). We then use HEARTS to assess 16 SOTA LLMs on 1,050 neutral prompts each for stereotype propagation: stereotype rates fall by 23% between model generations, yet clear differences remain across model families (LLaMA > Gemini > GPT > Claude). HEARTS thus supplies a practical, low-carbon and interpretable toolkit for measuring stereotype bias in language.
%U https://aclanthology.org/2025.ijcnlp-long.1/
%P 1-18
Markdown (Informal)
[HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection](https://aclanthology.org/2025.ijcnlp-long.1/) (King et al., IJCNLP-AACL 2025)
ACL
Theo King, Zekun Wu, Adriano Koshiyama, Emre Kazim, and Philip Colin Treleaven. 2025. HEARTS: A Holistic Framework for Explainable, Sustainable and Robust Text Stereotype Detection. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 1–18, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.
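
The abstract above notes that HEARTS blends SHAP and LIME token-level scores and reports a confidence measure that correlates with correctness (ρ = 0.18). As an informal illustration of what such a blend can look like, here is a minimal NumPy sketch; the per-attribution normalisation, the 50/50 averaging, and the rank-agreement confidence proxy are assumptions chosen for this example and are not taken from the paper.

```python
# Illustrative sketch only: the exact blending and confidence formulas used
# by HEARTS are described in the paper; the choices below are assumptions.
import numpy as np

def normalise(scores):
    """Scale a vector of token attributions to [-1, 1] by its largest magnitude."""
    scores = np.asarray(scores, dtype=float)
    peak = np.max(np.abs(scores))
    return scores / peak if peak > 0 else scores

def blend_attributions(shap_scores, lime_scores):
    """Average normalised SHAP and LIME attributions for the same token sequence."""
    return 0.5 * (normalise(shap_scores) + normalise(lime_scores))

def agreement_confidence(shap_scores, lime_scores):
    """A simple confidence proxy: rank correlation between the two explainers.

    Higher agreement between SHAP and LIME is treated as higher confidence in
    the explanation (a hypothetical choice, not the paper's measure).
    """
    ranks_shap = np.argsort(np.argsort(normalise(shap_scores)))
    ranks_lime = np.argsort(np.argsort(normalise(lime_scores)))
    return float(np.corrcoef(ranks_shap, ranks_lime)[0, 1])

# Made-up per-token attributions for an example sentence
# (token order: "nurses" / "are" / "always" / "women").
shap_scores = [0.40, 0.05, 0.30, 0.55]   # hypothetical SHAP values
lime_scores = [0.35, 0.00, 0.25, 0.60]   # hypothetical LIME weights

print(blend_attributions(shap_scores, lime_scores))
print(agreement_confidence(shap_scores, lime_scores))
```

In this sketch the blended scores simply highlight which tokens both explainers agree are driving the stereotype prediction, and the agreement score rises when SHAP and LIME rank tokens similarly; any resemblance to the paper's actual formulas is coincidental rather than documented here.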