Procedural Text Generation from a Photo Sequence

Taichi Nishimura, Atsushi Hashimoto, Shinsuke Mori


Abstract
Multimedia procedural texts, such as instructions and manuals with pictures, help people share how-to knowledge. In this paper, we propose a method for generating a procedural text from a photo sequence, allowing users to obtain a multimedia procedural text. We propose a single embedding space for both images and text, which interconnects the two modalities and enables the selection of appropriate words to describe a photo. We implemented our method and tested it on cooking instructions, i.e., recipes. Various experimental results show that our method outperforms standard baselines.
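The core idea of the abstract, selecting words for a photo by nearest-neighbor search in a shared image–text embedding space, can be sketched in a few lines. This is a minimal illustration with toy random vectors, not the paper's actual model or training procedure; the vocabulary, embedding dimension, and the way the photo embedding is constructed are all hypothetical assumptions.

```python
import numpy as np

def l2_normalize(x):
    # L2-normalize so that a dot product equals cosine similarity
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical word embeddings already mapped into the shared space
rng = np.random.default_rng(0)
vocab = ["cut", "mix", "bake", "serve"]
word_emb = l2_normalize(rng.normal(size=(len(vocab), 8)))

# Toy photo embedding: constructed near the "mix" word vector to
# simulate an image encoder mapping a photo into the same space
photo_emb = l2_normalize(word_emb[1] + 0.1 * rng.normal(size=8))

# Select the word whose embedding is most similar to the photo
scores = word_emb @ photo_emb
best_word = vocab[int(np.argmax(scores))]
print(best_word)
```

In the paper's setting, such word scores would condition a text generator rather than directly emit a single word; the sketch only shows why a shared space lets image features index the vocabulary.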
Anthology ID:
W19-8650
Volume:
Proceedings of the 12th International Conference on Natural Language Generation
Month:
October–November
Year:
2019
Address:
Tokyo, Japan
Editors:
Kees van Deemter, Chenghua Lin, Hiroya Takamura
Venue:
INLG
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
409–414
URL:
https://aclanthology.org/W19-8650
DOI:
10.18653/v1/W19-8650
Cite (ACL):
Taichi Nishimura, Atsushi Hashimoto, and Shinsuke Mori. 2019. Procedural Text Generation from a Photo Sequence. In Proceedings of the 12th International Conference on Natural Language Generation, pages 409–414, Tokyo, Japan. Association for Computational Linguistics.
Cite (Informal):
Procedural Text Generation from a Photo Sequence (Nishimura et al., INLG 2019)
PDF:
https://aclanthology.org/W19-8650.pdf