Wenjie Dong
2025
ProTOD: Proactive Task-oriented Dialogue System Based on Large Language Model
Wenjie Dong | Sirong Chen | Yan Yang
Proceedings of the 31st International Conference on Computational Linguistics
Large Language Model (LLM)-based Task-Oriented Dialogue (TOD) systems show promising performance in helping users achieve specific goals in a zero-shot setting. However, existing systems engage with users in a reactive manner, relying on a basic single-query mechanism with the knowledge base and employing passive policy planning. Proactive TOD systems, which can provide potentially helpful information and plan cross-domain multi-task dialogue policies, have not been well studied. In addition, effective evaluation methods are also lacking. To address these issues, we propose ProTOD, a novel LLM-based proactive TOD framework designed to improve system proactivity and goal completion. First, we design an adaptive exploratory retrieval mechanism to dynamically navigate domain knowledge. Second, we introduce a two-stage passive-to-proactive policy planner that effectively organizes the relationship between knowledge and actions. Finally, we develop two distinct user simulators with different personalities to simulate real-world interactions and propose a new error measure called Human-targeted Policy Edit Rate (HPER) for evaluation. Experimental results show that ProTOD achieves state-of-the-art (SOTA) performance, improving goal completion rates by 10% while significantly enhancing proactive engagement.
2024
A Survey on Open Information Extraction from Rule-based Model to Large Language Model
Liu Pai | Wenyang Gao | Wenjie Dong | Lin Ai | Ziwei Gong | Songfang Huang | Li Zongsheng | Ehsan Hoque | Julia Hirschberg | Yue Zhang
Findings of the Association for Computational Linguistics: EMNLP 2024
Open Information Extraction (OpenIE) is a crucial NLP task aimed at deriving structured information from unstructured text, unrestricted by relation type or domain. This survey provides an overview of OpenIE technologies from 2007 to 2024, emphasizing a chronological perspective absent in prior surveys. It examines how OpenIE task settings have evolved to align with recent technological advances, and it categorizes OpenIE approaches into rule-based, neural, and pre-trained large language model methods, discussing each within a chronological framework. Additionally, it highlights prevalent datasets and evaluation metrics currently in use. Building on this extensive review, the paper traces the evolution of task settings, data, evaluation metrics, and methodologies in the era of large language models, highlighting their mutual influence, comparing their capabilities, and examining their implications for open challenges and future research directions.
Co-authors
- Lin Ai 1
- Sirong Chen 1
- Wenyang Gao (高文炀) 1
- Ziwei Gong 1
- Julia Hirschberg 1