Lynnette Hui Xian Ng
2025
Humanizing Machines: Rethinking LLM Anthropomorphism Through a Multi-Level Framework of Design
Yunze Xiao | Lynnette Hui Xian Ng | Jiarui Liu | Mona T. Diab
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) increasingly exhibit anthropomorphic characteristics – human-like qualities portrayed across their outlook, language, behavior, and reasoning functions. Such characteristics enable more intuitive and engaging human-AI interactions. However, current research on anthropomorphism remains predominantly risk-focused, emphasizing over-trust and user deception while offering limited design guidance. We argue that anthropomorphism should instead be treated as a design concept that can be intentionally tuned to support user goals. Drawing from multiple disciplines, we propose that the anthropomorphism of an LLM-based artifact should reflect the interaction between artifact designers and interpreters. This interaction is facilitated by cues embedded in the artifact by the designers and the (cognitive) responses of the interpreters to those cues. Cues are categorized into four dimensions: perceptive, linguistic, behavioral, and cognitive. By analyzing the manifestation and effectiveness of each cue, we provide a unified taxonomy with actionable levers for practitioners. Consequently, we advocate for function-oriented evaluations of anthropomorphic design.
Who Speaks Matters: Analysing the Influence of the Speaker’s Linguistic Identity on Hate Classification
Ananya Malik | Kartik Sharma | Shaily Bhatt | Lynnette Hui Xian Ng
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models (LLMs) offer a lucrative promise for scalable content moderation, including hate speech detection. However, they are also known to be brittle and biased against marginalised communities and dialects. This requires that their application to high-stakes tasks like hate speech detection be critically scrutinized. In this work, we investigate the robustness of hate speech classification using LLMs, particularly when explicit and implicit markers of the speaker’s ethnicity are injected into the input. For explicit markers, we inject a phrase that mentions the speaker’s linguistic identity. For implicit markers, we inject dialectal features. By analysing how frequently model outputs flip in the presence of these markers, we reveal varying degrees of brittleness across 3 LLMs and 1 LM, and across 5 linguistic identities. We find that implicit dialect markers in inputs cause model outputs to flip more often than explicit markers. Further, the percentage of flips varies across ethnicities. Finally, we find that larger models are more robust. Our findings indicate the need for exercising caution in deploying LLMs for high-stakes tasks like hate speech detection.
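The flip-rate analysis the abstract describes can be sketched as follows; the toy classifier, the marker token, and all names here are hypothetical stand-ins, illustrating only the metric of counting label flips between original and marker-injected inputs:

```python
# Sketch of the flip-rate metric described in the abstract: the fraction of
# inputs whose predicted label changes once a speaker-identity marker is
# injected. The classifier and marker below are toy stand-ins, not the
# models or dialect features used in the paper.

def flip_rate(classify, originals, injected):
    """classify: text -> label; originals/injected are parallel lists."""
    flips = sum(
        1 for o, m in zip(originals, injected) if classify(o) != classify(m)
    )
    return flips / len(originals)

# Brittle toy classifier: flags "hate" unless the dialectal token "finna"
# is present, mimicking the marker-induced flips the paper measures.
toy = lambda text: "hate" in text and "finna" not in text

originals = ["i hate this", "nice day"]
injected = ["finna say i hate this", "finna say nice day"]
print(flip_rate(toy, originals, injected))  # 0.5
```

A flip rate of 0 would indicate a classifier fully robust to the injected identity markers; the paper's finding is that real models fall well short of that, especially for implicit dialectal markers.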
Crowdsource, Crawl, or Generate? Creating SEA-VL, a Multicultural Vision-Language Dataset for Southeast Asia
Samuel Cahyawijaya | Holy Lovenia | Joel Ruben Antony Moniz | Tack Hwa Wong | Mohammad Rifqi Farhansyah | Thant Thiri Maung | Frederikus Hudi | David Anugraha | Muhammad Ravi Shulthan Habibi | Muhammad Reza Qorib | Amit Agarwal | Joseph Marvin Imperial | Hitesh Laxmichand Patel | Vicky Feliren | Bahrul Ilmi Nasution | Manuel Antonio Rufino | Genta Indra Winata | Rian Adam Rajagede | Carlos Rafael Catalan | Mohamed Fazli Mohamed Imam | Priyaranjan Pattnayak | Salsabila Zahirah Pranida | Kevin Pratama | Yeshil Bangera | Adisai Na-Thalang | Patricia Nicole Monderin | Yueqi Song | Christian Simon | Lynnette Hui Xian Ng | Richardy Lobo Sapan | Taki Hasan Rafi | Bin Wang | Supryadi | Kanyakorn Veerakanjana | Piyalitt Ittichaiwong | Matthew Theodore Roque | Karissa Vincentio | Takdanai Kreangphet | Phakphum Artkaew | Kadek Hendrawan Palgunadi | Yanzhi Yu | Rochana Prih Hastuti | William Nixon | Mithil Bangera | Adrian Xuan Wei Lim | Aye Hninn Khine | Hanif Muhammad Zhafran | Teddy Ferdinan | Audra Aurora Izzani | Ayushman Singh | Evan Evan | Jauza Akbar Krito | Michael Anugraha | Fenal Ashokbhai Ilasariya | Haochen Li | John Amadeo Daniswara | Filbert Aurelian Tjiaranata | Eryawan Presma Yulianrifat | Can Udomcharoenchaikit | Fadil Risdian Ansori | Mahardika Krisna Ihsani | Giang Nguyen | Anab Maulana Barik | Dan John Velasco | Rifo Ahmad Genadi | Saptarshi Saha | Chengwei Wei | Isaiah Edri W. Flores | Kenneth Chen Ko Han | Anjela Gail D. Santos | Wan Shen Lim | Kaung Si Phyo | Tim Santos | Meisyarah Dwiastuti | Jiayun Luo | Jan Christian Blaise Cruz | Ming Shan Hee | Ikhlasul Akmal Hanif | M.Alif Al Hakim | Muhammad Rizky Sya’ban | Kun Kerdthaisong | Lester James Validad Miranda | Fajri Koto | Tirana Noor Fatyanosa | Alham Fikri Aji | Jostin Jerico Rosal | Jun Kevin | Robert Wijaya | Onno P. Kampman | Ruochen Zhang | Börje F. Karlsson | Peerat Limkonchotiwat
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite Southeast Asia’s (SEA) extraordinary linguistic and cultural diversity, the region remains significantly underrepresented in vision-language (VL) research, resulting in AI models that inadequately capture SEA cultural nuances. To fill this gap, we present SEA-VL, an open-source initiative dedicated to developing high-quality, culturally relevant datasets for SEA languages. By involving contributors from SEA countries, SEA-VL ensures better cultural relevance and diversity, fostering greater inclusivity of underrepresented languages and cultural depictions in VL research. Our methodology employed three approaches: community-driven crowdsourcing with SEA contributors, automated image crawling, and synthetic image generation. We evaluated each method’s effectiveness in capturing cultural relevance. We found that image crawling achieves approximately 85% cultural relevance while being more cost- and time-efficient than crowdsourcing, whereas synthetic image generation failed to accurately reflect SEA cultural nuances and contexts. Collectively, we gathered 1.28 million culturally relevant SEA images, more than 50 times larger than other existing datasets. This work bridges the representation gap in SEA, establishes a foundation for developing culturally aware AI systems for this region, and provides a replicable framework for addressing representation gaps in other underrepresented regions.
2022
How Hate Speech Varies by Target Identity: A Computational Analysis
Michael Miller Yoder | Lynnette Hui Xian Ng | David West Brown | Kathleen M. Carley
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
This paper investigates how hate speech varies in systematic ways according to the identities it targets. Across multiple hate speech datasets annotated for targeted identities, we find that classifiers trained on hate speech targeting specific identity groups struggle to generalize to other targeted identities. This provides empirical evidence for differences in hate speech by target identity; we then investigate which patterns structure this variation. We find that the targeted demographic category (e.g. gender/sexuality or race/ethnicity) appears to have a greater effect on the language of hate speech than does the relative social power of the targeted identity group. We also find that words associated with hate speech targeting specific identities often relate to stereotypes, histories of oppression, current social movements, and other social contexts specific to those identities. These experiments suggest the importance of considering targeted identity, as well as the social contexts associated with these identities, in automated hate speech classification.
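The cross-identity generalization experiment can be sketched as a train-on-one-group, test-on-every-group grid; the toy keyword classifier and dataset contents below are hypothetical, standing in for the real datasets and models:

```python
# Sketch of the cross-target generalization experiment described in the
# abstract: train on hate speech targeting one identity group, evaluate on
# every group. Dataset contents and helper names here are hypothetical.

def cross_target_grid(datasets, train_fn, eval_fn):
    """datasets: {group: (train_rows, test_rows)} -> {(src, tgt): accuracy}."""
    scores = {}
    for src, (train_rows, _) in datasets.items():
        model = train_fn(train_rows)
        for tgt, (_, test_rows) in datasets.items():
            scores[(src, tgt)] = eval_fn(model, test_rows)
    return scores

def train_fn(rows):
    # Toy "classifier": remember every word seen in hateful examples.
    vocab = set()
    for text, label in rows:
        if label:
            vocab.update(text.split())
    return vocab

def eval_fn(vocab, rows):
    # Predict hateful iff any remembered word appears; return accuracy.
    hits = sum(
        1 for text, label in rows
        if any(w in vocab for w in text.split()) == bool(label)
    )
    return hits / len(rows)

data = {
    "gender": ([("slur_g post", 1), ("good day", 0)],
               [("slur_g again", 1), ("hello", 0)]),
    "race":   ([("slur_r post", 1), ("fine day", 0)],
               [("slur_r again", 1), ("hi", 0)]),
}
grid = cross_target_grid(data, train_fn, eval_fn)
print(grid[("gender", "gender")], grid[("gender", "race")])  # 1.0 0.5
```

The diagonal of the grid (train and test on the same group) scoring higher than the off-diagonal cells is exactly the generalization gap the paper reports.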
2020
I miss you babe: Analyzing Emotion Dynamics During COVID-19 Pandemic
Lynnette Hui Xian Ng | Roy Ka-Wei Lee | Md Rabiul Awal
Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science
With the world on lockdown due to the COVID-19 pandemic, this paper studies emotions expressed on Twitter. Using a combined strategy of time-series analysis of emotions augmented by tweet topics, this study provides insight into emotion transitions during the pandemic. After tweets are annotated with dominant emotions and topics, a time-series emotion analysis identifies disgust and anger as the most commonly expressed emotions. Through longitudinal analysis of each user, we construct emotion transition graphs, observing key transitions between disgust and anger, as well as self-transitions within the anger and disgust emotional states. Clustering the user-level longitudinal analyses reveals that emotional transitions fall into four main clusters: (1) erratic movement over a short period, (2) disgust -> anger, (3) optimism -> joy, and (4) erratic movement over a prolonged period. Finally, we propose a method for predicting a user’s subsequent topic, and by consequence their emotions, by constructing an Emotion Topic Hidden Markov Model that augments emotion transition states with topic information. Results suggest that these predictions outperform baselines, spurring directions for predicting emotional states from Twitter posts.
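The per-user emotion transition graphs can be sketched as first-order Markov transition estimates over a chronological sequence of emotion labels; the code below is an illustrative reconstruction under that assumption, not the authors' implementation:

```python
# Sketch of the per-user emotion transition estimation behind the paper's
# transition graphs: count consecutive emotion pairs in a user's timeline
# and normalize into first-order Markov transition probabilities. This is
# an illustrative reconstruction, not the authors' implementation.
from collections import Counter

def transition_probs(sequence):
    """Chronological emotion labels -> {(from, to): probability}."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    from_counts = Counter(sequence[:-1])
    return {(a, b): n / from_counts[a] for (a, b), n in pair_counts.items()}

# A toy timeline showing the disgust/anger transitions highlighted in the
# abstract, including an anger self-transition.
timeline = ["disgust", "anger", "anger", "disgust", "anger"]
probs = transition_probs(timeline)
print(probs[("disgust", "anger")])  # 1.0
print(probs[("anger", "anger")])   # 0.5
```

The Emotion Topic Hidden Markov Model the abstract proposes would extend these observed transition states with per-topic emission information; that extension is not shown here.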
Co-authors
- Amit Agarwal 1
- Alham Fikri Aji 1
- Fadil Risdian Ansori 1
- David Anugraha 1
- Michael Anugraha 1
- Phakphum Artkaew 1
- Md Rabiul Awal 1
- Yeshil Bangera 1
- Mithil Bangera 1
- Anab Maulana Barik 1
- Shaily Bhatt 1
- David Brown 1
- Samuel Cahyawijaya 1
- Kathleen M. Carley 1
- Carlos Rafael Catalan 1
- Jan Christian Blaise Cruz 1
- John Amadeo Daniswara 1
- Mona Diab 1
- Meisyarah Dwiastuti 1
- Evan Evan 1
- Mohammad Rifqi Farhansyah 1
- Tirana Noor Fatyanosa 1
- Vicky Feliren 1
- Teddy Ferdinan 1
- Isaiah Edri W. Flores 1
- Rifo Ahmad Genadi 1
- Muhammad Ravi Shulthan Habibi 1
- M.Alif Al Hakim 1
- Kenneth Chen Ko Han 1
- Ikhlasul Akmal Hanif 1
- Rochana Prih Hastuti 1
- Ming Shan Hee 1
- Frederikus Hudi 1
- Mahardika Krisna Ihsani 1
- Fenal Ashokbhai Ilasariya 1
- Mohamed Fazli Mohamed Imam 1
- Joseph Marvin Imperial 1
- Piyalitt Ittichaiwong 1
- Audra Aurora Izzani 1
- Onno P. Kampman 1
- Börje F. Karlsson 1
- Kun Kerdthaisong 1
- Jun Kevin 1
- Aye Hninn Khine 1
- Fajri Koto 1
- Takdanai Kreangphet 1
- Jauza Akbar Krito 1
- Roy Ka-Wei Lee 1
- Haochen Li 1
- Adrian Xuan Wei Lim 1
- Wan Shen Lim 1
- Peerat Limkonchotiwat 1
- Jiarui Liu 1
- Holy Lovenia 1
- Jiayun Luo 1
- Ananya Malik 1
- Thant Thiri Maung 1
- Lester James Validad Miranda 1
- Patricia Nicole Monderin 1
- Joel Ruben Antony Moniz 1
- Adisai Na-Thalang 1
- Bahrul Ilmi Nasution 1
- Giang Nguyen 1
- William Nixon 1
- Kadek Hendrawan Palgunadi 1
- Hitesh Laxmichand Patel 1
- Priyaranjan Pattnayak 1
- Kaung Si Phyo 1
- Salsabila Zahirah Pranida 1
- Kevin Pratama 1
- Muhammad Reza Qorib 1
- Taki Hasan Rafi 1
- Rian Adam Rajagede 1
- Matthew Theodore Roque 1
- Jostin Jerico Rosal 1
- Manuel Antonio Rufino 1
- Saptarshi Saha 1
- Anjela Gail D. Santos 1
- Tim Santos 1
- Richardy Lobo Sapan 1
- Kartik Sharma 1
- Christian Simon 1
- Ayushman Singh 1
- Yueqi Song 1
- Supryadi 1
- Muhammad Rizky Sya’ban 1
- Filbert Aurelian Tjiaranata 1
- Can Udomcharoenchaikit 1
- Kanyakorn Veerakanjana 1
- Dan John Velasco 1
- Karissa Vincentio 1
- Bin Wang 1
- Chengwei Wei 1
- Robert Wijaya 1
- Genta Indra Winata 1
- Tack Hwa Wong 1
- Yunze Xiao 1
- Michael Miller Yoder 1
- Yanzhi Yu 1
- Eryawan Presma Yulianrifat 1
- Hanif Muhammad Zhafran 1
- Ruochen Zhang 1