Subjectivity Detection in English News using Large Language Models

Mohammad Shokri, Vivek Sharma, Elena Filatova, Shweta Jain, Sarah Levitan


Abstract
Trust in media has reached a historic low as consumers increasingly doubt the credibility of the news they encounter. This growing skepticism is exacerbated by the prevalence of opinion-driven articles, which can sway readers’ beliefs toward the authors’ viewpoints. In response to this trend, this study examines the expression of opinions in news by detecting subjective and objective language. We analyze the subjectivity present in several news datasets and evaluate how different language models detect subjectivity and generalize to out-of-distribution data. We also investigate the use of in-context learning (ICL) with large language models (LLMs) and propose a straightforward prompting method that outperforms standard ICL and chain-of-thought (CoT) prompts.
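For readers unfamiliar with the setup the abstract describes, the sketch below shows what few-shot ICL for sentence-level subjectivity classification typically looks like. The label set (SUBJ/OBJ), the example sentences, the prompt wording, and the `query_llm` placeholder are illustrative assumptions, not the prompts or method proposed in the paper.

```python
# Minimal sketch of few-shot in-context learning (ICL) for sentence-level
# subjectivity detection. All prompt text and examples here are illustrative;
# they are not the prompts used by Shokri et al. (2024).

FEW_SHOT_EXAMPLES = [
    ("The bill passed the Senate by a vote of 68 to 32.", "OBJ"),
    ("The senator's reckless vote betrayed every hardworking family.", "SUBJ"),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking an LLM to label a news sentence."""
    lines = [
        "Classify each news sentence as SUBJ (subjective) or OBJ (objective).",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}\nLabel: {label}\n")
    lines.append(f"Sentence: {sentence}\nLabel:")
    return "\n".join(lines)

def classify(sentence: str, query_llm) -> str:
    """Label a sentence; `query_llm` is a placeholder for any LLM client call
    that takes a prompt string and returns the model's text response."""
    response = query_llm(build_prompt(sentence))
    return "SUBJ" if "SUBJ" in response.upper() else "OBJ"
```

A zero-shot variant would simply drop the worked examples, and a CoT variant would ask the model to explain its reasoning before emitting the label; the paper compares its proposed prompting method against both.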
Anthology ID:
2024.wassa-1.17
Volume:
Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Orphée De Clercq, Valentin Barriere, Jeremy Barnes, Roman Klinger, João Sedoc, Shabnam Tafreshi
Venues:
WASSA | WS
Publisher:
Association for Computational Linguistics
Pages:
215–226
URL:
https://aclanthology.org/2024.wassa-1.17
Cite (ACL):
Mohammad Shokri, Vivek Sharma, Elena Filatova, Shweta Jain, and Sarah Levitan. 2024. Subjectivity Detection in English News using Large Language Models. In Proceedings of the 14th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis, pages 215–226, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Subjectivity Detection in English News using Large Language Models (Shokri et al., WASSA-WS 2024)
PDF:
https://aclanthology.org/2024.wassa-1.17.pdf