Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization

Olubusayo Olabisi, Ameeta Agrawal


Abstract
Text summarization models have typically focused on optimizing aspects of quality such as fluency, relevance, and coherence, particularly in the context of news articles. However, summarization models are increasingly being used to summarize diverse sources of text, such as social media data, that encompass a wide demographic user base. It is thus crucial to assess not only the quality of the generated summaries, but also the extent to which they can fairly represent the opinions of diverse social groups. Position bias, a long-known issue in news summarization, has received limited attention in the context of social multi-document summarization. We investigate this phenomenon in depth by analyzing the effect of group ordering in input documents when summarizing tweets from three distinct linguistic communities: African-American English, Hispanic-aligned Language, and White-aligned Language. Our empirical analysis shows that although the textual quality of the summaries remains consistent regardless of the input document order, fairness varies significantly depending on how the dialect groups are ordered in the input data. Our results suggest that position bias manifests differently in social multi-document summarization, severely impacting the fairness of summarization models.
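The abstract describes permuting the order in which dialect groups appear in the multi-document input and then comparing how fairly each group is represented in the resulting summary. The sketch below is a minimal, hypothetical illustration of that setup, not the authors' code or data: the group names come from the paper, while the toy tweets, the summary budget K, and the lead-K baseline summarizer (which simply keeps the first K tweets and is therefore maximally position-sensitive) are assumptions made for illustration.

from itertools import permutations
from collections import Counter

# Hypothetical toy tweets standing in for the three dialect groups studied in the paper.
groups = {
    "African-American English": ["aae tweet 1", "aae tweet 2", "aae tweet 3"],
    "Hispanic-aligned Language": ["hal tweet 1", "hal tweet 2", "hal tweet 3"],
    "White-aligned Language": ["wal tweet 1", "wal tweet 2", "wal tweet 3"],
}

K = 4  # summary budget: number of tweets kept by the illustrative lead-K baseline

for ordering in permutations(groups):
    # Build the multi-document input by concatenating the groups in this order.
    labeled_docs = [(g, tweet) for g in ordering for tweet in groups[g]]

    # A lead-K baseline keeps only the first K tweets of the input, so whichever
    # group is placed first dominates the "summary" regardless of content.
    summary = labeled_docs[:K]
    representation = Counter(g for g, _ in summary)

    print(" > ".join(ordering), dict(representation))

Running this prints, for each of the six group orderings, how many summary slots each group receives; with a position-sensitive summarizer the counts shift with the ordering even though the underlying tweet collection is identical, which is the kind of order-dependent representation gap the paper measures with proper summarization models and fairness metrics.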
Anthology ID:
2024.vardial-1.10
Volume:
Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Yves Scherrer, Tommi Jauhiainen, Nikola Ljubešić, Marcos Zampieri, Preslav Nakov, Jörg Tiedemann
Venues:
VarDial | WS
Publisher:
Association for Computational Linguistics
Pages:
117–129
URL:
https://aclanthology.org/2024.vardial-1.10
DOI:
10.18653/v1/2024.vardial-1.10
Cite (ACL):
Olubusayo Olabisi and Ameeta Agrawal. 2024. Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization. In Proceedings of the Eleventh Workshop on NLP for Similar Languages, Varieties, and Dialects (VarDial 2024), pages 117–129, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Understanding Position Bias Effects on Fairness in Social Multi-Document Summarization (Olabisi & Agrawal, VarDial-WS 2024)
PDF:
https://aclanthology.org/2024.vardial-1.10.pdf
Supplementary material:
2024.vardial-1.10.SupplementaryMaterial.txt