A Neural, Interactive-predictive System for Multimodal Sequence to Sequence Tasks

Álvaro Peris, Francisco Casacuberta


Abstract
We present a demonstration of a neural interactive-predictive system for tackling multimodal sequence to sequence tasks. The system generates text predictions for different sequence to sequence tasks: machine translation, image captioning and video captioning. These predictions are revised by a human agent, who introduces corrections in the form of characters. The system reacts to each correction, providing alternative hypotheses that comply with the feedback provided by the user. The final objective is to reduce the human effort required during this correction process. The system is implemented following a client-server architecture. To access the system, we developed a website, which communicates with the neural model, hosted on a local server. From this website, the different tasks can be tackled following the interactive-predictive framework. We open-source all the code developed for building this system. The demonstration is hosted at http://casmacat.prhlt.upv.es/interactive-seq2seq.
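The interaction loop the abstract describes (the user validates a prefix and types a corrected character; the system regenerates a hypothesis compatible with that prefix) can be sketched as prefix-constrained decoding. The sketch below is only illustrative: `score_next` and `toy_model` are hypothetical stand-ins for the neural decoder, which in the real system is a Keras-based sequence to sequence model.

```python
def constrained_decode(score_next, prefix, max_len, eos="</s>"):
    """Greedy character-level decoding constrained to start with a
    user-validated prefix. `score_next` maps a partial string to the
    most likely next character (a stand-in for the neural model)."""
    hyp = prefix
    while len(hyp) < max_len:
        c = score_next(hyp)
        if c == eos:
            break
        hyp += c
    return hyp

# Toy "model": always continues toward a fixed target string,
# emulating a decoder that agrees with the user's corrections.
target = "a cat on a mat"

def toy_model(partial):
    return target[len(partial)] if len(partial) < len(target) else "</s>"

# Interactive-predictive step: the user has validated the prefix
# "a cat"; the system completes the remainder of the hypothesis.
hyp = constrained_decode(toy_model, "a cat", max_len=len(target))
```

In the demonstrated system the same idea applies to any of the supported tasks, since machine translation and captioning share the text-generation decoder; only the input modality (source sentence, image, or video) changes.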
Anthology ID:
P19-3014
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
81–86
URL:
https://aclanthology.org/P19-3014
DOI:
10.18653/v1/P19-3014
Cite (ACL):
Álvaro Peris and Francisco Casacuberta. 2019. A Neural, Interactive-predictive System for Multimodal Sequence to Sequence Tasks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 81–86, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
A Neural, Interactive-predictive System for Multimodal Sequence to Sequence Tasks (Peris & Casacuberta, ACL 2019)
PDF:
https://aclanthology.org/P19-3014.pdf
Code:
lvapeab/interactive-keras-captioning