Saeed Hassanpour


2024

Addressing Healthcare-related Racial and LGBTQ+ Biases in Pretrained Language Models
Sean Xie | Saeed Hassanpour | Soroush Vosoughi
Findings of the Association for Computational Linguistics: NAACL 2024

Recent studies have highlighted the issue of Pretrained Language Models (PLMs) inadvertently propagating social stigmas and stereotypes, a critical concern given their widespread use. This is particularly problematic in sensitive areas like healthcare, where such biases could lead to detrimental outcomes. Our research addresses this by adapting two intrinsic bias benchmarks to quantify racial and LGBTQ+ biases in prevalent PLMs. We also empirically evaluate the effectiveness of various debiasing methods in mitigating these biases. Furthermore, we assess the impact of debiasing on both Natural Language Understanding and specific biomedical applications. Our findings reveal that while PLMs commonly exhibit healthcare-related racial and LGBTQ+ biases, the applied debiasing techniques successfully reduce these biases without compromising the models’ performance in downstream tasks.
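The abstract does not spell out the benchmarks, but a common family of intrinsic bias measures compares a masked language model's pseudo-log-likelihoods for sentence pairs that differ only in a demographic term (CrowS-Pairs-style scoring). The sketch below illustrates that general idea only; the model name and the healthcare-flavored sentence pair are illustrative assumptions, not the paper's actual benchmark or data.

```python
# Minimal sketch of an intrinsic bias probe: compare a masked LM's
# pseudo-log-likelihood (PLL) for two sentences that differ only in a
# demographic term. Model name and sentences are illustrative only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # assumption: any masked LM could be scored this way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token_i | rest of sentence) with each token masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# Illustrative minimal pair; real benchmarks aggregate over many such pairs.
pair = ("The white patient followed the treatment plan.",
        "The Black patient followed the treatment plan.")
for sentence in pair:
    print(f"{pseudo_log_likelihood(sentence):8.2f}  {sentence}")
```

Aggregating the direction of such PLL gaps over many minimally different pairs yields a simple intrinsic bias score against which debiasing methods can be compared.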

MentalManip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations
Yuxin Wang | Ivory Yang | Saeed Hassanpour | Soroush Vosoughi
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Mental manipulation, a significant form of abuse in interpersonal conversations, is difficult to identify because of its context-dependent and often subtle nature. Detecting manipulative language is essential for protecting potential victims, yet Natural Language Processing (NLP) currently lacks resources and research on this topic. Our study addresses this gap by introducing MentalManip, a new dataset of 4,000 annotated fictional dialogues. The dataset enables a comprehensive analysis of mental manipulation, pinpointing both the manipulation techniques used and the victim vulnerabilities targeted. We further evaluate how well state-of-the-art models recognize manipulative dialogue and its components through a series of experiments with various configurations. The results show that these models identify and categorize manipulative content poorly, and fine-tuning on existing mental health and toxicity datasets does not overcome these limitations. We anticipate that MentalManip will stimulate further research, leading to progress in both understanding and mitigating the impact of mental manipulation in conversations.
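The dataset's release format is not described here, so the following is only a rough sketch of the kind of baseline the abstract evaluates: fine-tuning a pretrained classifier to flag manipulative dialogues. The file names, column names, and binary label scheme are assumptions for illustration, not the released format.

```python
# Minimal sketch of a binary "manipulative vs. not" dialogue classifier.
# CSV layout and field names ("dialogue", "label") are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Hypothetical files with columns "dialogue" (full conversation text)
# and "label" (1 = manipulative, 0 = not).
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["dialogue"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="manip_clf", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables padding collation
)
trainer.train()
print(trainer.evaluate())
```

Fine-grained analysis (which technique, which vulnerability) would replace the binary head with a multi-label one, but the training loop is otherwise the same.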

2023

Improving Syntactic Probing Correctness and Robustness with Control Tasks
Weicheng Ma | Brian Wang | Hefan Zhang | Lili Wang | Rolando Coto-Solano | Saeed Hassanpour | Soroush Vosoughi
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Syntactic probing methods have been used to examine whether and how pre-trained language models (PLMs) encode syntactic features. However, these methods are often biased by PLMs’ memorization of common word co-occurrences, even when those co-occurrences do not form syntactic relations. This paper presents random-word-substitution and random-label-matching control tasks that reduce these biases and improve the robustness of syntactic probing. The control tasks also notably improve the consistency of results across different probing methods, make probing more robust to the text attributes of the probing instances, improve the reconstruction of syntactic features, and generalize better to unseen text domains. Our experiments show that the proposed control tasks are effective across different PLMs, probing methods, and syntactic features.
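As a rough illustration of the control-task idea (closest in spirit to the random-label-matching control mentioned above, not the paper's exact construction): train one probe on real syntactic labels and an identical probe on labels detached from the inputs; the gap between the two accuracies indicates how much syntax the representations genuinely encode rather than how much the probe can memorize. The arrays below are random placeholders standing in for frozen PLM representations and syntactic tags.

```python
# Sketch of a random-label control for probing: identical probes on real vs.
# shuffled labels; a probe that scores well on both is memorizing, not probing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 768))        # placeholder for frozen PLM embeddings
y_real = rng.integers(0, 5, size=2000)  # placeholder syntactic labels
y_ctrl = rng.permutation(y_real)        # control task: labels detached from inputs

def probe_accuracy(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

real_acc = probe_accuracy(X, y_real)
ctrl_acc = probe_accuracy(X, y_ctrl)
print(f"probe accuracy: {real_acc:.3f}, control accuracy: {ctrl_acc:.3f}")
print(f"selectivity (gap): {real_acc - ctrl_acc:.3f}")
```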

Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
Sean Xie | Soroush Vosoughi | Saeed Hassanpour
Findings of the Association for Computational Linguistics: EMNLP 2023

Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability remains a major concern. Current methods for interpreting LLMs are post hoc, applied after inference, and have limitations such as a focus on low-level features and a lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that lets LLMs learn immediately interpretable embeddings during fine-tuning while maintaining competitive performance. We demonstrate the method’s applicability and interpretability through experiments on a wide range of NLP tasks, and our results show that interpretable models can be built without sacrificing performance, paving the way for further work in this direction. We release our code at https://github.com/yx131/proto-lm.
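The authors' implementation is linked above; the snippet below is only a minimal, generic sketch of a prototype-based classification head of the kind the abstract describes, where learnable prototype vectors live in the model's embedding space and predictions come from similarities to them, so each decision can be traced back to individual prototypes (and, in practice, to the training examples closest to them). All dimensions and hyperparameters here are illustrative assumptions.

```python
# Generic prototype-based classification head (illustrative sketch; see
# https://github.com/yx131/proto-lm for the authors' actual implementation).
import torch
import torch.nn as nn

class PrototypeHead(nn.Module):
    def __init__(self, hidden_dim: int, num_classes: int, protos_per_class: int = 4):
        super().__init__()
        # Learnable prototype vectors, a fixed number per class.
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * protos_per_class, hidden_dim))
        # Each prototype contributes evidence toward the classes.
        self.proto_to_class = nn.Linear(num_classes * protos_per_class,
                                        num_classes, bias=False)

    def forward(self, sentence_embedding: torch.Tensor) -> torch.Tensor:
        # Negative distance to every prototype: closer prototype = stronger evidence.
        sims = -torch.cdist(sentence_embedding, self.prototypes)  # (batch, n_protos)
        return self.proto_to_class(sims)                          # (batch, num_classes)

# Usage with a stand-in for 768-dim LM sentence embeddings:
head = PrototypeHead(hidden_dim=768, num_classes=2)
cls_embeddings = torch.randn(8, 768)
logits = head(cls_embeddings)
print(logits.shape)  # torch.Size([8, 2])
```

Because the similarity scores themselves form the classification evidence, inspecting which prototypes fired (and which training inputs they resemble) is what makes this style of head interpretable by construction rather than post hoc.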