Yuhan Song
2024
Would You Like to Make a Donation? A Dialogue System to Persuade You to Donate
Yuhan Song | Houfeng Wang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Persuasive dialogue is a type of dialogue commonly used in daily life, in scenarios such as promotion and sales. Its purpose is to influence another person's decisions, attitudes, or behavior through the dialogue process. Persuasive automated dialogue systems can be applied in a variety of fields such as charity, business, education, and healthcare. Despite their impressive abilities, Large Language Models (LLMs) such as ChatGPT still have limitations in persuasion, and little current research on automated dialogue systems is dedicated to persuasive dialogue. In this paper, we introduce a persuasive automated dialogue system. In the system, a context-aware persuasion strategy selection module enables the dialogue system to flexibly apply different persuasion strategies to persuade users; a natural language generation module then produces the response. We also propose a persuasiveness prediction model to automatically evaluate the persuasiveness of generated text. Experimental results show that our dialogue system outperforms baseline models on several automated evaluation metrics.
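The two-stage pipeline the abstract describes (context-aware strategy selection followed by strategy-conditioned generation) can be illustrated with a minimal sketch. The strategy inventory, the selection heuristic, and all names below are hypothetical placeholders rather than the paper's actual modules; a real system would score strategies with a learned, context-aware classifier and generate text with a conditioned language model.

```python
# Hypothetical sketch of a strategy-select-then-generate pipeline.
# STRATEGIES, select_strategy, and generate_response are illustrative
# assumptions, not the paper's components.

from dataclasses import dataclass, field

# Strategy labels loosely follow common persuasion taxonomies
# (e.g., the PersuasionForGood scheme); the paper's inventory may differ.
STRATEGIES = [
    "logical-appeal",
    "emotion-appeal",
    "credibility-appeal",
    "foot-in-the-door",
    "personal-story",
]

@dataclass
class DialogueState:
    history: list[str] = field(default_factory=list)
    used_strategies: list[str] = field(default_factory=list)

def select_strategy(state: DialogueState) -> str:
    """Context-aware selection stand-in: vary the persuasive approach
    by preferring strategies not yet tried in this dialogue."""
    for s in STRATEGIES:
        if s not in state.used_strategies:
            return s
    return STRATEGIES[0]  # all tried: fall back to cycling

def generate_response(strategy: str, state: DialogueState) -> str:
    """NLG stand-in: a real module would condition a generation model
    on both the dialogue history and the selected strategy."""
    last_user_turn = state.history[-1] if state.history else ""
    return f"[{strategy}] response to: {last_user_turn!r}"

state = DialogueState()
state.history.append("I'm not sure donating is worth it.")
strategy = select_strategy(state)
state.used_strategies.append(strategy)
print(generate_response(strategy, state))
```

The point of the sketch is the separation of concerns: the selector decides *how* to persuade on each turn, and the generator decides *what* to say given that decision.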
2022
Unsupervised Chinese Word Segmentation with BERT Oriented Probing and Transformation
Wei Li | Yuhan Song | Qi Su | Yanqiu Shao
Findings of the Association for Computational Linguistics: ACL 2022
Word segmentation is a fundamental step in understanding the Chinese language. Previous neural approaches to unsupervised Chinese Word Segmentation (CWS) exploit only shallow semantic information, which can miss important context. Large-scale pre-trained language models (PLMs) have achieved great success in many areas because of their ability to capture deep contextual semantic relations. In this paper, we propose to take advantage of the deep semantic information embedded in a PLM (e.g., BERT) in a self-training manner, which iteratively probes and transforms the semantic information in the PLM into explicit word segmentation ability. Extensive experimental results show that our proposed approach achieves state-of-the-art F1 scores on two CWS benchmark datasets.
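To make the probe-and-transform idea concrete, the sketch below scores the affinity of adjacent characters using BERT's contextual embeddings and inserts a word boundary where affinity is weak; the induced segmentations could then serve as pseudo-labels for the next self-training round. The cosine-similarity probe, the threshold, and the one-token-per-character alignment are illustrative simplifications, not the paper's exact procedure.

```python
# Hypothetical probe step for unsupervised CWS: split where adjacent
# characters' contextual embeddings diverge. Model choice, threshold,
# and the probe itself are assumptions for illustration only.

import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

def probe_boundaries(sentence: str, threshold: float = 0.6) -> list[str]:
    """Propose word boundaries from adjacent-character affinity.

    Assumes each character maps to exactly one BERT token (true for
    common Chinese characters), so hidden states align with characters.
    """
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Drop the [CLS] and [SEP] positions to align with characters.
        hidden = model(**inputs).last_hidden_state[0, 1:-1]
    words, current = [], sentence[0]
    for i in range(len(sentence) - 1):
        sim = torch.cosine_similarity(hidden[i], hidden[i + 1], dim=0)
        if sim < threshold:            # weak affinity -> word boundary
            words.append(current)
            current = sentence[i + 1]
        else:                          # strong affinity -> same word
            current += sentence[i + 1]
    words.append(current)
    return words

# The resulting segmentations would be used as pseudo-labels to train
# an explicit segmenter, which is then iterated in self-training.
print(probe_boundaries("北京大学的学生在学习"))
```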