LionGuard: A Contextualized Moderation Classifier to Tackle Localized Unsafe Content

Jessica Foo, Shaun Khoo
Abstract
As large language models (LLMs) become increasingly prevalent in a wide variety of applications, concerns about the safety of their outputs have become more significant. Most efforts at safety-tuning or moderation today take on a predominantly Western-centric view of safety, especially for toxic, hateful, or violent speech. In this paper, we describe LionGuard, a Singapore-contextualized moderation classifier that can serve as guardrails against unsafe LLM usage. When assessed on Singlish data, LionGuard outperforms existing widely-used moderation APIs, which are not finetuned for the Singapore context, by at least 14% (binary) and up to 51% (multi-label). Our work highlights the benefits of localization for moderation classifiers and presents a practical and scalable approach for low-resource languages, particularly English-based creoles.
Anthology ID:
2025.coling-industry.60
Volume:
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert, Kareem Darwish, Apoorv Agarwal
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
707–731
URL:
https://aclanthology.org/2025.coling-industry.60/
Cite (ACL):
Jessica Foo and Shaun Khoo. 2025. LionGuard: A Contextualized Moderation Classifier to Tackle Localized Unsafe Content. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 707–731, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
LionGuard: A Contextualized Moderation Classifier to Tackle Localized Unsafe Content (Foo & Khoo, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-industry.60.pdf