Joint Generation of Captions and Subtitles with Dual Decoding

Jitao Xu, François Buet, Josep Crego, Elise Bertin-Lemée, François Yvon


Abstract
As the amount of audio-visual content increases, developing automatic captioning and subtitling solutions that match the expectations of a growing international audience appears to be the only viable way to boost throughput and lower the related post-production costs. Automatic captioning and subtitling often need to be tightly intertwined to achieve an appropriate level of consistency and synchronization with each other and with the video signal. In this work, we assess a dual decoding scheme to achieve a strong coupling between these two tasks and show how adequacy and consistency are increased, with virtually no additional cost in terms of model size and training complexity.
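To illustrate the general idea of dual decoding described in the abstract, the following is a minimal sketch, assuming standard PyTorch modules: a single shared encoder and two decoders that produce captions and subtitles jointly under one training loss. All class, parameter, and variable names here are illustrative and not taken from the paper; the actual model additionally couples the two decoders during generation, and the authors' real implementation is in the linked repository (jitao-xu/dual-decoding).

import torch
import torch.nn as nn


def causal_mask(size: int) -> torch.Tensor:
    # Standard autoregressive mask: position i may only attend to positions <= i.
    return torch.triu(torch.full((size, size), float("-inf")), diagonal=1)


class DualDecoderTransformer(nn.Module):
    """Illustrative sketch: one shared encoder, two decoders for captions and subtitles."""

    def __init__(self, src_vocab, cap_vocab, sub_vocab,
                 d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, d_model)
        self.cap_emb = nn.Embedding(cap_vocab, d_model)
        self.sub_emb = nn.Embedding(sub_vocab, d_model)

        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.cap_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
        self.sub_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)

        self.cap_proj = nn.Linear(d_model, cap_vocab)
        self.sub_proj = nn.Linear(d_model, sub_vocab)

    def forward(self, src, cap_prefix, sub_prefix):
        # The source is encoded once and shared by both decoders, which is what
        # couples the two output streams in this simplified variant.
        memory = self.encoder(self.src_emb(src))
        cap_h = self.cap_decoder(self.cap_emb(cap_prefix), memory,
                                 tgt_mask=causal_mask(cap_prefix.size(1)))
        sub_h = self.sub_decoder(self.sub_emb(sub_prefix), memory,
                                 tgt_mask=causal_mask(sub_prefix.size(1)))
        return self.cap_proj(cap_h), self.sub_proj(sub_h)


# Toy training step: a single joint loss over both outputs ties the tasks together.
if __name__ == "__main__":
    model = DualDecoderTransformer(src_vocab=1000, cap_vocab=1000, sub_vocab=1000)
    src = torch.randint(0, 1000, (2, 12))   # toy source token ids
    cap = torch.randint(0, 1000, (2, 10))   # toy caption token ids
    sub = torch.randint(0, 1000, (2, 11))   # toy subtitle token ids
    cap_logits, sub_logits = model(src, cap[:, :-1], sub[:, :-1])
    ce = nn.CrossEntropyLoss()
    loss = ce(cap_logits.reshape(-1, 1000), cap[:, 1:].reshape(-1)) + \
           ce(sub_logits.reshape(-1, 1000), sub[:, 1:].reshape(-1))
    loss.backward()

Sharing the encoder (and, in the paper, coupling the decoders) is what allows the caption and subtitle streams to stay consistent with each other while adding almost no parameters compared to two independent models.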
Anthology ID:
2022.iwslt-1.7
Volume:
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland (in-person and online)
Editors:
Elizabeth Salesky, Marcello Federico, Marta Costa-jussà
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
Association for Computational Linguistics
Pages:
74–82
URL:
https://aclanthology.org/2022.iwslt-1.7
DOI:
10.18653/v1/2022.iwslt-1.7
Cite (ACL):
Jitao Xu, François Buet, Josep Crego, Elise Bertin-Lemée, and François Yvon. 2022. Joint Generation of Captions and Subtitles with Dual Decoding. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 74–82, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Cite (Informal):
Joint Generation of Captions and Subtitles with Dual Decoding (Xu et al., IWSLT 2022)
PDF:
https://aclanthology.org/2022.iwslt-1.7.pdf
Code:
jitao-xu/dual-decoding
Data:
MuST-Cinema