2024
Redefining Proactivity for Information Seeking Dialogue
Jing Yang Lee | Seokhwan Kim | Kartik Mehta | Jiun-Yu Kao | Yu-Hsiang Lin | Arpit Gupta
Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024)
Humans pay careful attention to the interlocutor’s internal state in dialogue. For example, in recommendation dialogues, we make recommendations while estimating the seeker’s internal state, such as their level of knowledge and interest. Since no annotated resources existed for such analysis and experiments, we constructed RecomMind, a movie recommendation dialogue dataset with annotations of the seeker’s internal state at the entity level. Each entity has a first-person label annotated by the seeker and a second-person label annotated by the recommender. Our analysis based on RecomMind reveals that recommendations are more successful when recommenders mention entities that seekers do not know but are interested in. We also propose a response generation framework that explicitly considers the seeker’s internal state, utilizing chain-of-thought prompting. Human evaluation results show that our proposed method outperforms the baseline method in both consistency and the success of recommendations.
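The described framework combines entity-level internal-state estimation with chain-of-thought prompting. As a minimal sketch of that two-step structure (the prompt wording, `COT_PROMPT`, and `build_prompt` below are hypothetical illustrations, not the paper's actual template), the model is first asked to infer the seeker's knowledge and interest before generating the recommendation:

```python
# Hypothetical two-step chain-of-thought prompt: estimate the seeker's
# internal state per entity, then condition the response on it.
COT_PROMPT = (
    "Dialogue so far:\n{history}\n\n"
    "Step 1: For each entity mentioned, estimate whether the seeker "
    "already knows it and how interested they seem to be in it.\n"
    "Step 2: Based on those estimates, recommend an entity the seeker "
    "does not know but is likely interested in, and write the response."
)

def build_prompt(history: str) -> str:
    """Fill the dialogue history into the chain-of-thought template."""
    return COT_PROMPT.format(history=history)
```

The resulting string would be sent to a language model, whose Step 1 output serves as intermediate reasoning grounding the final recommendation.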
2023
Partially Randomizing Transformer Weights for Dialogue Response Diversity
Jing Yang Lee | Kong Aik Lee | Woon-Seng Gan
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
An Empirical Bayes Framework for Open-Domain Dialogue Generation
Jing Yang Lee | Kong Aik Lee | Woon Seng Gan
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
To engage human users in meaningful conversation, open-domain dialogue agents are required to generate diverse and contextually coherent dialogue. Despite recent advancements, attributable to the use of pretrained language models, the generation of diverse and coherent dialogue remains an open research problem. A popular approach to this issue involves the adaptation of variational frameworks. However, while these approaches successfully improve diversity, they tend to compromise contextual coherence. Hence, we propose the Bayesian Open-domain Dialogue with Empirical Bayes (BODEB) framework, an Empirical Bayes framework for constructing a Bayesian open-domain dialogue agent by leveraging pretrained parameters to inform the prior and posterior parameter distributions. Empirical results show that BODEB achieves better results in terms of both diversity and coherence compared to variational frameworks.
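BODEB uses pretrained parameters to inform the weight distributions. A minimal illustration of the Empirical Bayes step, estimating a Gaussian prior from the pretrained weights themselves (the function names and the shared-variance choice below are illustrative assumptions, not the paper's exact recipe):

```python
import numpy as np

def empirical_bayes_prior(pretrained_w):
    """Gaussian prior over each weight, centered at its pretrained value,
    with a shared variance estimated from the empirical spread of the
    pretrained weights (the 'empirical' part of Empirical Bayes)."""
    mu = pretrained_w
    sigma2 = float(np.var(pretrained_w))
    return mu, sigma2

def sample_weights(mu, sigma2, rng):
    """Draw one weight realization from the prior for a stochastic
    forward pass of the Bayesian dialogue agent."""
    return rng.normal(mu, np.sqrt(sigma2))
```

Centering the prior on pretrained values lets the Bayesian agent inherit the pretrained model's behavior while its sampled weights inject the variability associated with diverse responses.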
2022
A Randomized Link Transformer for Diverse Open-Domain Dialogue Generation
Jing Yang Lee | Kong Aik Lee | Woon Seng Gan
Proceedings of the 4th Workshop on NLP for Conversational AI
A major issue in open-domain dialogue generation is the agent’s tendency to generate repetitive and generic responses. This lack of response diversity has been addressed in recent years via latent variable models, such as the Conditional Variational Auto-Encoder (CVAE), which typically involve learning a latent Gaussian distribution over potential response intents. However, due to latent variable collapse, training latent variable dialogue models is notoriously complex, requiring substantial modification to the standard training process and loss function. Other approaches proposed to improve response diversity also largely entail a significant increase in training complexity. Hence, this paper proposes a Randomized Link (RL) Transformer as an alternative to latent variable models. The RL Transformer does not require any additional enhancements to the training process or loss function. Empirical results show that, in terms of response diversity, the RL Transformer achieves performance comparable to that of latent variable models.
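The abstract does not spell out the Randomized Link architecture itself. As a minimal sketch of the general idea of partially randomizing weights, a random subset of a layer's weights stays frozen at its random initialization and is excluded from gradient updates (the class and method names below are hypothetical):

```python
import numpy as np

class PartiallyRandomLinear:
    """Linear layer in which a random subset of weights is frozen at its
    random initialization and never trained; only the remaining weights
    receive gradient updates. A simplified sketch of partially
    randomizing transformer weights, not the paper's exact design."""

    def __init__(self, d_in, d_out, frozen_frac=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        # Boolean mask of trainable entries; the rest stay random forever.
        self.trainable = rng.random((d_in, d_out)) >= frozen_frac

    def forward(self, x):
        return x @ self.W

    def apply_grad(self, grad_W, lr=0.1):
        # Masked update: frozen links keep their initialization values.
        self.W -= lr * grad_W * self.trainable
```

The frozen random links perturb each forward pass away from the purely learned solution, which is one plausible mechanism for the increased response diversity, while training proceeds with an unmodified loss.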