Media Bias Detection Across Families of Language Models

Iffat Maab, Edison Marrese-Taylor, Sebastian Padó, Yutaka Matsuo


Abstract
Bias in reporting can influence the public’s opinion on relevant societal issues. Examples include informational bias (selective presentation of content) and lexical bias (specific framing of content through linguistic choices). The recognition of media bias is arguably an area where NLP can contribute to the “social good”. Traditional NLP models have shown good performance in classifying media bias, but require careful model design and extensive tuning. In this paper, we ask how well prompted large language models can recognize media bias. Through an extensive empirical study including a wide selection of pre-trained models, we find that prompt-based techniques can deliver comparable performance to traditional models with greatly reduced effort and that, similar to traditional models, the availability of context substantially improves results. We further show that larger models can leverage different kinds of context simultaneously, obtaining further performance improvements.
Anthology ID:
2024.naacl-long.227
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4083–4098
URL:
https://aclanthology.org/2024.naacl-long.227
DOI:
10.18653/v1/2024.naacl-long.227
Cite (ACL):
Iffat Maab, Edison Marrese-Taylor, Sebastian Padó, and Yutaka Matsuo. 2024. Media Bias Detection Across Families of Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4083–4098, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Media Bias Detection Across Families of Language Models (Maab et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.227.pdf