Nicholas Deas
2024
MASIVE: Open-Ended Affective State Identification in English and Spanish
Nicholas Deas | Elsbeth Turcan | Ivan Ernesto Perez Mejia | Kathleen McKeown
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
In the field of emotion analysis, much NLP research focuses on identifying a limited number of discrete emotion categories, often applied across languages. These basic sets, however, are rarely designed with textual data in mind, and culture, language, and dialect can influence how particular emotions are interpreted. In this work, we broaden our scope to a practically unbounded set of affective states, which includes any terms that humans use to describe their experiences of feeling. We collect and publish MASIVE, a dataset of Reddit posts in English and Spanish containing over 1,000 unique affective states each. We then define the new problem of affective state identification for language generation models framed as a masked span prediction task. On this task, we find that smaller finetuned multilingual models outperform much larger LLMs, even on region-specific Spanish affective states. Additionally, we show that pretraining on MASIVE improves model performance on existing emotion benchmarks. Finally, through machine translation experiments, we find that native speaker-written data is vital to good performance on this task.
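The abstract frames affective state identification as masked span prediction for generation models. Below is a minimal sketch of that framing using a multilingual seq2seq model via Hugging Face Transformers; the model choice (mT5), the example sentence, and the sentinel-token setup are illustrative assumptions, not details taken from the paper.

```python
# Sketch: affective state identification framed as masked span prediction,
# in the style of T5/mT5 span corruption. Model and example are assumptions;
# MASIVE's exact experimental setup may differ.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/mt5-small"  # hypothetical choice of multilingual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The affective state term is replaced with a sentinel token; the model
# is asked to generate the hidden span.
text = "After reading the reviews, I feel so <extra_id_0> about the decision."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the predicted span is open-ended text rather than a class label, this framing admits an effectively unbounded label set, which is what distinguishes the task from classification over a fixed emotion inventory.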
2023
Evaluation of African American Language Bias in Natural Language Generation
Nicholas Deas | Jessica Grieser | Shana Kleiner | Desmond Patton | Elsbeth Turcan | Kathleen McKeown
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
While biases disadvantaging African American Language (AAL) have been uncovered in models for tasks such as speech recognition and toxicity detection, there has been little investigation of these biases for language generation models like ChatGPT. We evaluate how well LLMs understand AAL in comparison to White Mainstream English (WME), the encouraged “standard” form of English taught in American classrooms. We measure large language model performance on two tasks: a counterpart generation task, where a model generates AAL given WME and vice versa, and a masked span prediction (MSP) task, where models predict a phrase hidden from their input. Using a novel dataset of AAL texts from a variety of regions and contexts, we present evidence of dialectal bias for six pre-trained LLMs through performance gaps on these tasks.
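The masked span prediction (MSP) task here asks a model to recover a phrase hidden from its input, and the bias evaluation compares performance on this task across AAL and WME texts. The following is a hedged sketch of one way such a comparison could be scored with a T5-style model; the scoring function, model, and example text are assumptions for illustration, not the paper's actual protocol.

```python
# Sketch of an MSP-style scoring idea: cross-entropy of a gold span given
# masked input, so scores can be compared across AAL and WME inputs.
# Model, helper, and example are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def span_loss(masked_text: str, gold_span: str) -> float:
    """Cross-entropy loss of the gold span given the masked input."""
    inputs = tokenizer(masked_text, return_tensors="pt")
    labels = tokenizer(f"<extra_id_0> {gold_span} <extra_id_1>",
                       return_tensors="pt").input_ids
    with torch.no_grad():
        return model(**inputs, labels=labels).loss.item()

# Systematically higher loss on one dialect's texts than the other's
# would indicate a performance gap of the kind the paper measures.
print(span_loss("They <extra_id_0> to the store yesterday.", "went"))
```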
Co-authors
- Elsbeth Turcan 2
- Kathleen McKeown 2
- Ivan Ernesto Perez Mejia 1
- Jessica Grieser 1
- Shana Kleiner 1