Felix Faltings
2023
Interactive Text Generation
Felix Faltings | Michel Galley | Kianté Brantley | Baolin Peng | Weixin Cai | Yizhe Zhang | Jianfeng Gao | Bill Dolan
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Users interact with text, image, code, or other editors on a daily basis. However, machine learning models are rarely trained in settings that reflect this interactivity between users and their editor. This is understandable, as training AI models with real users is not only slow and costly, but what these models learn may also be specific to user interface design choices. Unfortunately, this means most research on text, code, and image generation has focused on non-interactive settings, in which the model is expected to get everything right without accounting for any input from a user who may be willing to help. We introduce a new Interactive Text Generation task that allows generation models to be trained interactively, without the cost of involving real users, by using user simulators that provide edits guiding the model towards a given target text. We train our interactive models using Imitation Learning, and our experiments against competitive non-interactive generation models show that models trained interactively are superior to their non-interactive counterparts, even when all models are given the same budget of user inputs or edits.
2021
Text Editing by Command
Felix Faltings | Michel Galley | Gerold Hintz | Chris Brockett | Chris Quirk | Jianfeng Gao | Bill Dolan
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step. The one-shot setting is inadequate, however, when the constraints the user wishes to impose on the generated text are dynamic, especially when authoring longer documents. We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text. To this end, we propose a novel text editing task, and introduce WikiDocEdits, a dataset of single-sentence edits crawled from Wikipedia. We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations. We present empirical and qualitative analyses of this model’s performance.