2024
Experience as Source for Anticipation and Planning: Experiential Policy Learning for Target-driven Recommendation Dialogues
Huy Quang Dao | Yang Deng | Khanh-Huyen Bui | Dung D. Le | Lizi Liao
Findings of the Association for Computational Linguistics: EMNLP 2024
Target-driven recommendation dialogues present unique challenges in dialogue management due to the necessity of anticipating user interactions for successful conversations. Current methods face significant limitations: (I) inadequate capabilities for conversation anticipation, (II) computational inefficiencies due to costly simulations, and (III) neglect of valuable past dialogue experiences. To address these limitations, we propose a new framework, Experiential Policy Learning (EPL), for enhancing such dialogues. EPL embodies the principle of Learning From Experience, facilitating anticipation with an experiential scoring function that estimates dialogue state potential using similar past interactions stored in long-term memory. To demonstrate its flexibility, we introduce Tree-structured EPL (T-EPL) as one possible training-free realization with Large Language Models (LLMs) and Monte-Carlo Tree Search (MCTS). T-EPL assesses past dialogue states with LLMs while utilizing MCTS to achieve hierarchical and multi-level reasoning. Extensive experiments on two published datasets demonstrate the superiority and efficacy of T-EPL.
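The abstract describes an experiential scoring function that estimates a dialogue state's potential by retrieving similar past interactions from long-term memory. The following is a minimal, hypothetical sketch of that retrieval-and-score idea only, not the paper's actual implementation: the class name ExperientialMemory, the method score_state, and the embedding/outcome representation are all illustrative assumptions, and the MCTS planning layer of T-EPL is omitted.

# Hypothetical sketch: score a dialogue state by similarity-weighted outcomes
# of past dialogues stored in a long-term memory (illustrative only).
import numpy as np


class ExperientialMemory:
    """Stores (state embedding, outcome) pairs from past dialogues."""

    def __init__(self):
        self.embeddings = []   # list of 1-D numpy vectors
        self.outcomes = []     # e.g., 1.0 if the past dialogue reached its target, else 0.0

    def add(self, embedding, outcome):
        self.embeddings.append(np.asarray(embedding, dtype=float))
        self.outcomes.append(float(outcome))

    def score_state(self, query_embedding, k=5):
        """Estimate a dialogue state's potential from its k most similar past states."""
        if not self.embeddings:
            return 0.0
        query = np.asarray(query_embedding, dtype=float)
        mat = np.stack(self.embeddings)
        # Cosine similarity between the query state and every stored state.
        sims = mat @ query / (np.linalg.norm(mat, axis=1) * np.linalg.norm(query) + 1e-8)
        top = np.argsort(sims)[-k:]
        weights = np.clip(sims[top], 0.0, None)
        if weights.sum() == 0.0:
            return float(np.mean(np.asarray(self.outcomes)[top]))
        # Similarity-weighted average of past outcomes serves as the experiential score.
        return float(weights @ np.asarray(self.outcomes)[top] / weights.sum())


if __name__ == "__main__":
    memory = ExperientialMemory()
    rng = np.random.default_rng(0)
    for _ in range(100):
        emb = rng.normal(size=16)
        memory.add(emb, outcome=float(emb[0] > 0))  # toy "success" signal
    print(memory.score_state(rng.normal(size=16)))

In the paper's training-free T-EPL variant, the scoring of past dialogue states is delegated to an LLM and combined with MCTS for multi-level reasoning; the sketch above only illustrates the underlying learning-from-experience principle.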
2021
S-NLP at SemEval-2021 Task 5: An Analysis of Dual Networks for Sequence Tagging
Viet Anh Nguyen | Tam Minh Nguyen | Huy Quang Dao | Quang Huu Pham
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
SemEval-2021 Task 5: Toxic Spans Detection asks systems to identify the spans of text considered toxic, providing a valuable automatic tool for moderating online content. This paper presents the second-place method for the task, an ensemble of two approaches. One approach combines different embedding methods to extract diverse semantic and syntactic representations of words in context; the other uses extra data with a slightly customized self-training procedure, a semi-supervised learning technique for sequence tagging problems. Both architectures build on a strong language model fine-tuned on a toxic classification task. Although experimental evidence indicates that the first approach is more effective than the second, combining them yields our best result of 70.77 F1-score on the test dataset.
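The self-training component mentioned above can be illustrated with a minimal, hypothetical sketch: a tagger is trained on labeled data, used to pseudo-label extra unlabeled text, and only high-confidence pseudo-labels are added back for retraining. The toy per-token classifier, the helper names (train_tagger, self_train), and the 0.9 confidence threshold are assumptions for illustration; the actual system fine-tunes a pretrained language model.

# Hypothetical sketch of confidence-filtered self-training for token tagging.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

vectorizer = HashingVectorizer(analyzer="char_wb", ngram_range=(1, 3), n_features=2**12)


def train_tagger(tokens, labels):
    """Fit a per-token classifier (stand-in for a fine-tuned LM tagger)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.transform(tokens), labels)
    return clf


def self_train(labeled_tokens, labeled_tags, unlabeled_tokens, rounds=3, threshold=0.9):
    tokens, tags = list(labeled_tokens), list(labeled_tags)
    for _ in range(rounds):
        clf = train_tagger(tokens, tags)
        if not unlabeled_tokens:
            break
        probs = clf.predict_proba(vectorizer.transform(unlabeled_tokens))
        confident = np.max(probs, axis=1) >= threshold
        # Add only high-confidence pseudo-labels, as in standard self-training.
        tokens += [t for t, keep in zip(unlabeled_tokens, confident) if keep]
        tags += list(clf.classes_[np.argmax(probs[confident], axis=1)])
        unlabeled_tokens = [t for t, keep in zip(unlabeled_tokens, confident) if not keep]
    return train_tagger(tokens, tags)


if __name__ == "__main__":
    labeled = ["idiot", "hello", "stupid", "thanks", "moron", "friend"]
    tags = ["TOXIC", "O", "TOXIC", "O", "TOXIC", "O"]
    unlabeled = ["idiots", "thank", "stupidly", "hi"]
    model = self_train(labeled, tags, unlabeled)
    print(model.predict(vectorizer.transform(["stupid", "hello"])))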
2020
ReINTEL Challenge 2020: A Comparative Study of Hybrid Deep Neural Network for Reliable Intelligence Identification on Vietnamese SNSs
Hoang Viet Trinh | Tung Tien Bui | Tam Minh Nguyen | Huy Quang Dao | Quang Huu Pham | Ngoc N. Tran
Proceedings of the 7th International Workshop on Vietnamese Language and Speech Processing