Mitigating Framing Bias with Polarity Minimization Loss

Yejin Bang, Nayeon Lee, Pascale Fung


Abstract
Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).
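
Below is a minimal, illustrative sketch (not the authors' released implementation) of how a polarity minimization term could be combined with a BART-based multi-document summarization objective, as the abstract describes: the model is jointly trained to map each polarity end onto the other (left-to-right and right-to-left) alongside generating the target summary. The checkpoint name, the concatenation of inputs, and the weight lambda_polarity are assumptions made for illustration only.

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def lm_loss(src_text: str, tgt_text: str):
    """Teacher-forced cross-entropy of generating tgt_text conditioned on src_text."""
    enc = tokenizer(src_text, return_tensors="pt", truncation=True, max_length=1024)
    labels = tokenizer(tgt_text, return_tensors="pt", truncation=True, max_length=1024).input_ids
    return model(**enc, labels=labels).loss

def training_step(left_article: str, right_article: str, neutral_summary: str,
                  lambda_polarity: float = 1.0):
    # Standard multi-document summarization loss: generate the neutral summary
    # from the concatenation of the polarized input articles.
    summ_loss = lm_loss(left_article + " " + right_article, neutral_summary)

    # Polarity minimization loss (hypothetical formulation): jointly map the two
    # polarity ends onto each other in both directions.
    polarity_loss = lm_loss(left_article, right_article) + lm_loss(right_article, left_article)

    return summ_loss + lambda_polarity * polarity_loss

The returned scalar can be backpropagated as usual in a training loop; the relative weighting of the polarity term against the summarization term is a design choice not specified in the abstract.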
Anthology ID: 2023.findings-emnlp.742
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 11100–11110
URL: https://aclanthology.org/2023.findings-emnlp.742
DOI: 10.18653/v1/2023.findings-emnlp.742
Cite (ACL): Yejin Bang, Nayeon Lee, and Pascale Fung. 2023. Mitigating Framing Bias with Polarity Minimization Loss. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11100–11110, Singapore. Association for Computational Linguistics.
Cite (Informal): Mitigating Framing Bias with Polarity Minimization Loss (Bang et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.742.pdf
Video: https://aclanthology.org/2023.findings-emnlp.742.mp4