Mitigating Topic Bias when Detecting Decisions in Dialogue

Mladen Karan, Prashant Khare, Patrick Healey, Matthew Purver


Abstract
This work revisits the task of detecting decision-related utterances in multi-party dialogue. We compare the performance of a traditional feature-based approach with a deep learning approach built on transformer language models, with the latter providing modest improvements. We then analyze topic bias in the models using topic information obtained by manual annotation. We find that, when detecting some types of decisions in our data, models rely more on topic-specific words that the decisions are about than on words that more generally indicate decision making. We explore this further by removing topic information from the training data, and show that this resolves the bias issues to an extent and, surprisingly, sometimes even boosts performance.
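As an illustration of the debiasing idea described in the abstract, one simple way to remove topic information from training utterances is to mask topic-specific words while leaving general decision-indicating words intact. The sketch below is hypothetical: the word list and mask token are placeholders, not the paper's actual procedure (which derives topic information from manual annotation).

```python
import re

# Hypothetical topic-word list; in the paper, topic information comes
# from manual annotation, so a real list would be derived from that.
TOPIC_WORDS = {"remote", "battery", "screen"}  # illustrative only


def mask_topic_words(utterance, topic_words=TOPIC_WORDS, mask="[MASK]"):
    """Replace topic-specific tokens with a mask token, keeping
    general decision-related words (e.g. 'decide', 'agree') intact."""
    tokens = utterance.split()
    return " ".join(
        mask if re.sub(r"\W", "", tok).lower() in topic_words else tok
        for tok in tokens
    )


print(mask_topic_words("OK, let's decide on the remote"))
# the topic word 'remote' is masked; 'decide' is kept
```

A model trained on utterances processed this way can no longer exploit topic-specific vocabulary and is pushed toward generic decision cues.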
Anthology ID:
2021.sigdial-1.56
Volume:
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Month:
July
Year:
2021
Address:
Singapore and Online
Editors:
Haizhou Li, Gina-Anne Levow, Zhou Yu, Chitralekha Gupta, Berrak Sisman, Siqi Cai, David Vandyke, Nina Dethlefs, Yan Wu, Junyi Jessy Li
Venue:
SIGDIAL
SIG:
SIGDIAL
Publisher:
Association for Computational Linguistics
Pages:
542–547
URL:
https://aclanthology.org/2021.sigdial-1.56
DOI:
10.18653/v1/2021.sigdial-1.56
Bibkey:
Cite (ACL):
Mladen Karan, Prashant Khare, Patrick Healey, and Matthew Purver. 2021. Mitigating Topic Bias when Detecting Decisions in Dialogue. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 542–547, Singapore and Online. Association for Computational Linguistics.
Cite (Informal):
Mitigating Topic Bias when Detecting Decisions in Dialogue (Karan et al., SIGDIAL 2021)
PDF:
https://aclanthology.org/2021.sigdial-1.56.pdf
Video:
https://www.youtube.com/watch?v=vJiJn1cjFH0