Eunjung Cho
2026
NLP for Social Good: A Survey and Outlook of Challenges, Opportunities and Responsible Deployment
Antonia Karamolegkou | Angana Borah | Eunjung Cho | Sagnik Ray Choudhury | Martina Galletti | Pranav Gupta | Oana Ignat | Priyanka Kargupta | Neema Kotonya | Hemank Lamba | Sun-Joo Lee | Arushi Mangla | Ishani Mondal | Fatima Zahra Moudakir | Deniz Nazar | Poli Nemkova | Dina Pisarevskaya | Naquee Rizwan | Nazanin Sabri | Keenan Samway | Dominik Stammbach | Anna Steinberg Schulten | David Tomás | Steven R Wilson | Bowen Yi | Jessica H Zhu | Arkaitz Zubiaga | Anders Søgaard | Alexander Fraser | Zhijing Jin | Rada Mihalcea | Joel R. Tetreault | Daryna Dementieva
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Natural language processing (NLP) now shapes many aspects of our world, yet its potential for positive social impact is underexplored. This paper surveys work in “NLP for Social Good” (NLP4SG) across nine domains relevant to global development and risk agendas, summarizing principal tasks and challenges. We analyze ACL Anthology trends, finding that inclusion and AI harms attract the most research, while domains such as poverty, peacebuilding, and environmental protection remain underexplored. Guided by our review, we outline opportunities for responsible and equitable NLP and conclude with a call for cross-disciplinary partnerships and human-centered approaches to ensure that future NLP technologies advance the public good.
2025
Hermit Kingdom Through the Lens of Multiple Perspectives: A Case Study of LLM Hallucination on North Korea
Eunjung Cho | Won Ik Cho | Soomin Seo
Proceedings of the 31st International Conference on Computational Linguistics
Hallucination in large language models (LLMs) remains a significant challenge for their safe deployment, particularly due to its potential to spread misinformation. Most existing solutions address this challenge by focusing on aligning the models with credible sources or by improving how models communicate their confidence (or lack thereof) in their outputs. While these measures may be effective in most contexts, they may fall short in scenarios requiring more nuanced approaches, especially in situations where access to accurate data is limited or determining credible sources is challenging. In this study, we take North Korea - a country characterised by an extreme lack of reliable sources and the prevalence of sensationalist falsehoods - as a case study. We explore and evaluate how some of the best-performing multilingual LLMs and specific language-based models generate information about North Korea in three languages spoken in countries with significant geo-political interests: English (United States, United Kingdom), Korean (South Korea), and Mandarin Chinese (China). Our findings reveal significant differences, suggesting that the choice of model and language can lead to vastly different understandings of North Korea, which has important implications given the global security challenges the country poses.
Modeling Motivated Reasoning in Law: Evaluating Strategic Role Conditioning in LLM Summarization
Eunjung Cho | Alexander Hoyle | Yoan Hermstrüwer
Proceedings of the Natural Legal Language Processing Workshop 2025
Large Language Models (LLMs) are increasingly used to generate user-tailored summaries, adapting outputs to specific stakeholders. In legal contexts, this raises important questions about motivated reasoning — how models strategically frame information to align with a stakeholder’s position within the legal system. Building on theories of legal realism and recent trends in legal practice, we investigate how LLMs respond to prompts conditioned on different legal roles (e.g., judges, prosecutors, attorneys) when summarizing judicial decisions. We introduce an evaluation framework grounded in legal fact and reasoning inclusion, also considering favorability towards stakeholders. Our results show that even when prompts include balancing instructions, models exhibit selective inclusion patterns that reflect role-consistent perspectives. These findings raise broader concerns about how similar alignment may emerge as LLMs begin to infer user roles from prior interactions or context, even without explicit role instructions. Our results underscore the need for role-aware evaluation of LLM summarization behavior in high-stakes legal settings.
2024
Aligning Large Language Models with Diverse Political Viewpoints
Dominik Stammbach | Philine Widmer | Eunjung Cho | Caglar Gulcehre | Elliott Ash
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models such as ChatGPT exhibit striking political biases. If users query them about political information, they often take a normative stance. To overcome this, we align LLMs with diverse political viewpoints from 100,000 comments written by candidates running for national parliament in Switzerland. Models aligned with this data can generate more accurate political viewpoints from Swiss parties, compared to commercial models such as ChatGPT. We also propose a procedure to generate balanced overviews summarizing multiple viewpoints using such models. The replication package contains all code and data.
Co-authors
- Dominik Stammbach 2
- Elliott Ash 1
- Angana Borah 1
- Won Ik Cho 1
- Sagnik Ray Choudhury 1
- Daryna Dementieva 1
- Alexander Fraser 1
- Martina Galletti 1
- Pranav Gupta 1
- Çağlar Gülçehre 1
- Yoan Hermstrüwer 1
- Alexander Miserlis Hoyle 1
- Oana Ignat 1
- Zhijing Jin 1
- Antonia Karamolegkou 1
- Priyanka Kargupta 1
- Neema Kotonya 1
- Hemank Lamba 1
- Sun-Joo Lee 1
- Arushi Mangla 1
- Rada Mihalcea 1
- Ishani Mondal 1
- Fatima Zahra Moudakir 1
- Deniz Nazar 1
- Poli Nemkova 1
- Dina Pisarevskaya 1
- Naquee Rizwan 1
- Nazanin Sabri 1
- Keenan Samway 1
- Anna Steinberg Schulten 1
- Soomin Seo 1
- Anders Søgaard 1
- Joel Tetreault 1
- David Tomás 1
- Philine Widmer 1
- Steven R Wilson 1
- Bowen Yi 1
- Jessica H Zhu 1
- Arkaitz Zubiaga 1