Text Generation Indistinguishable from Target Person by Prompting Few Examples Using LLM

Yuka Tsubota, Yoshinobu Kano


Abstract
To achieve smooth and natural communication between a dialogue system and a human, the dialogue system needs to behave in a more human-like manner. Recreating the personality of an actual person can be an effective way to achieve this. This study proposes a method to recreate a personality with a large language model (generative AI) without any training, using only prompting techniques, to keep the creation cost as low as possible. Collecting a large amount of dialogue data from a specific person is not easy, and training on such data requires a significant amount of time. Therefore, we aim to recreate the personality of a specific individual without using dialogue data. The personality referred to in this paper denotes the image of a person that can be determined solely from the input and output of text dialogues. The experiments revealed that prompts combining profile information, responses to a few questions, and speaking characteristics extracted from those responses improve the reproducibility of a specific individual's personality.
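The abstract describes the prompt as combining three ingredients: profile information, a few example question/answer pairs from the target person, and speaking characteristics extracted from those answers. The sketch below illustrates one way such a prompt could be assembled; the field names, prompt wording, and example data are assumptions for illustration only, not the paper's actual prompts or experimental setup.

```python
# Minimal sketch of a zero-training persona prompt in the spirit of the abstract:
# profile + few example Q&A pairs + extracted speaking characteristics.
# All wording and field names here are illustrative assumptions.

def build_persona_prompt(profile: str,
                         example_qa: list[tuple[str, str]],
                         speaking_traits: list[str],
                         new_question: str) -> str:
    """Assemble a persona-imitation prompt for a chat LLM (no fine-tuning)."""
    qa_block = "\n".join(f"Q: {q}\nA: {a}" for q, a in example_qa)
    traits_block = "\n".join(f"- {t}" for t in speaking_traits)
    return (
        "You are role-playing the person described below. "
        "Answer the final question exactly as that person would.\n\n"
        f"[Profile]\n{profile}\n\n"
        f"[Example answers by this person]\n{qa_block}\n\n"
        f"[Speaking characteristics extracted from the examples]\n{traits_block}\n\n"
        f"Q: {new_question}\nA:"
    )


if __name__ == "__main__":
    prompt = build_persona_prompt(
        profile="30s, office worker, likes cats, speaks casually.",
        example_qa=[("What did you do today?", "Mostly just relaxed at home, honestly.")],
        speaking_traits=["short sentences", "casual tone", "self-deprecating humor"],
        new_question="Any plans for the weekend?",
    )
    print(prompt)  # send this string to any chat-completion LLM
```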
Anthology ID:
2024.aiwolfdial-1.2
Volume:
Proceedings of the 2nd International AIWolfDial Workshop
Month:
September
Year:
2024
Address:
Tokyo, Japan
Editor:
Yoshinobu Kano
Venues:
AIWolfDial | WS
Publisher:
Association for Computational Linguistics
Pages:
13–20
URL:
https://aclanthology.org/2024.aiwolfdial-1.2
Cite (ACL):
Yuka Tsubota and Yoshinobu Kano. 2024. Text Generation Indistinguishable from Target Person by Prompting Few Examples Using LLM. In Proceedings of the 2nd International AIWolfDial Workshop, pages 13–20, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Text Generation Indistinguishable from Target Person by Prompting Few Examples Using LLM (Tsubota & Kano, AIWolfDial-WS 2024)
PDF:
https://aclanthology.org/2024.aiwolfdial-1.2.pdf