Exploring User Dissatisfaction: Taxonomy of Implicit Negative Feedback in Virtual Assistants

Moushumi Mahato, Avinash Kumar, Kartikey Singh, Javaid Nabi, Debojyoti Saha, Krishna Singh


Abstract
The success of virtual assistants relies on continuous performance monitoring to maintain their competitive edge in the market. This entails assessing their ability to understand user intents and execute tasks effectively. While user feedback is pivotal for measuring satisfaction, relying solely on explicit feedback is impractical, so extracting implicit feedback from conversations between the user and the virtual assistant is a more efficient approach. Moreover, beyond learning whether a task was performed correctly, it is essential to understand the reasons behind any incorrect execution. In this paper, we introduce a framework that identifies dissatisfactory conversations, systematically analyzes them, and generates comprehensive reports detailing the reasons for user dissatisfaction. A feedback classifier flags conversations that indicate user dissatisfaction, which serves as a signal of implicit negative feedback. To analyze these negative feedback conversations more deeply, we develop a lightweight pipeline, an issue categorizer that ensembles multiple models, to determine the reasons behind the dissatisfaction. We then augment the identified dissatisfactory instances to generate additional training data and retrain our models to prevent such failures in the future. Our implementation of this framework, called AsTrix (Assisted Triage and Fix), led to significant improvements in our smartphone-based in-house virtual assistant, with successful task completion rates increasing from 83.1% to 92.6% between June 2022 and March 2024. Moreover, by automating the deeper analysis for just five major issue types contributing to dissatisfaction, we address approximately 62% of the negative feedback conversation data.
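As a rough illustration of the triage flow described in the abstract, the sketch below wires a feedback classifier, an issue categorizer ensemble, and a data augmentation step into a single loop. This is a minimal sketch and not the authors' implementation: the issue taxonomy, the cue-based placeholder classifier, and all names (feedback_classifier, issue_categorizer, augment, triage) are illustrative assumptions.

```python
# Illustrative AsTrix-style triage sketch (assumptions only, not the paper's code).
from dataclasses import dataclass
from typing import List, Dict, Tuple

# Hypothetical taxonomy standing in for the five major issue types.
ISSUE_TYPES = ["asr_error", "intent_misclassification", "slot_error",
               "unsupported_capability", "execution_failure"]

@dataclass
class Conversation:
    turns: List[str]

def feedback_classifier(conv: Conversation) -> bool:
    """Flag implicit negative feedback (placeholder cue-matching logic)."""
    cues = ("that's not what", "no, i meant", "cancel that", "never mind")
    text = " ".join(conv.turns).lower()
    return any(cue in text for cue in cues)

def issue_categorizer(conv: Conversation) -> str:
    """Stub for the ensemble that decides why the task failed."""
    return ISSUE_TYPES[0]

def augment(conv: Conversation, issue: str) -> List[Conversation]:
    """Stub for generating extra training instances from a failed conversation."""
    return [conv]

def triage(conversations: List[Conversation]) -> Tuple[Dict[str, List[Conversation]], List[Conversation]]:
    """Group dissatisfactory conversations by issue and collect retraining data."""
    report: Dict[str, List[Conversation]] = {issue: [] for issue in ISSUE_TYPES}
    training_pool: List[Conversation] = []
    for conv in conversations:
        if feedback_classifier(conv):        # implicit negative feedback detected
            issue = issue_categorizer(conv)  # reason for the failure
            report[issue].append(conv)
            training_pool.extend(augment(conv, issue))
    return report, training_pool
```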
Anthology ID: 2024.icon-1.26
Volume: Proceedings of the 21st International Conference on Natural Language Processing (ICON)
Month: December
Year: 2024
Address: AU-KBC Research Centre, Chennai, India
Editors: Sobha Lalitha Devi, Karunesh Arora
Venue: ICON
Publisher: NLP Association of India (NLPAI)
Pages: 230–242
URL: https://aclanthology.org/2024.icon-1.26/
Cite (ACL): Moushumi Mahato, Avinash Kumar, Kartikey Singh, Javaid Nabi, Debojyoti Saha, and Krishna Singh. 2024. Exploring User Dissatisfaction: Taxonomy of Implicit Negative Feedback in Virtual Assistants. In Proceedings of the 21st International Conference on Natural Language Processing (ICON), pages 230–242, AU-KBC Research Centre, Chennai, India. NLP Association of India (NLPAI).
Cite (Informal): Exploring User Dissatisfaction: Taxonomy of Implicit Negative Feedback in Virtual Assistants (Mahato et al., ICON 2024)
PDF: https://aclanthology.org/2024.icon-1.26.pdf