ChatEdit: Towards Multi-turn Interactive Facial Image Editing via Dialogue

Xing Cui, Zekun Li, Pei Li, Yibo Hu, Hailin Shi, Chunshui Cao, Zhaofeng He


Abstract
This paper explores interactive facial image editing through dialogue and presents the ChatEdit benchmark dataset for evaluating image editing and conversation abilities in this context. ChatEdit is constructed from the CelebA-HQ dataset, incorporating annotated multi-turn dialogues that correspond to user editing requests on the images. The dataset is challenging, as it requires the system to dynamically track and edit images based on user requests while generating appropriate natural language responses. To address these challenges, we propose a framework comprising a dialogue module for tracking user requests and generating responses, and an image editing module for editing images accordingly. Unlike previous approaches, our framework directly tracks the current turn's user request from the entire dialogue history and edits the initial image instead of manipulating the output of the previous turn, mitigating error accumulation and attribute forgetting. Extensive experiments on the ChatEdit dataset demonstrate the superiority of our framework over previous methods and also reveal room for improvement, encouraging future research. We will release the code and data publicly to facilitate advancements in complex interactive facial image editing.
Anthology ID:
2023.emnlp-main.899
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14567–14583
URL:
https://aclanthology.org/2023.emnlp-main.899
DOI:
10.18653/v1/2023.emnlp-main.899
Cite (ACL):
Xing Cui, Zekun Li, Pei Li, Yibo Hu, Hailin Shi, Chunshui Cao, and Zhaofeng He. 2023. ChatEdit: Towards Multi-turn Interactive Facial Image Editing via Dialogue. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14567–14583, Singapore. Association for Computational Linguistics.
Cite (Informal):
ChatEdit: Towards Multi-turn Interactive Facial Image Editing via Dialogue (Cui et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.899.pdf
Video:
https://aclanthology.org/2023.emnlp-main.899.mp4