Self-training Reduces Flicker in Retranslation-based Simultaneous Translation

Sukanta Sen, Rico Sennrich, Biao Zhang, Barry Haddow


Abstract
In simultaneous translation, the retranslation approach has the advantage of requiring no modifications to the inference engine. However, in order to reduce the undesirable flicker in the output, previous work has resorted to increasing latency through masking and to introducing specialised inference, thus losing the simplicity of the approach. In this work, we show that self-training improves the flicker-latency tradeoff, while maintaining similar translation quality to the original. Our analysis indicates that self-training reduces flicker by controlling monotonicity. Furthermore, self-training can be combined with biased beam search to further improve the flicker-latency tradeoff.
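
The abstract refers to flicker: a retranslation system rewriting tokens it has already displayed. As a rough illustration only, the short Python sketch below counts flicker as the number of previously displayed tokens that are not kept as a prefix by the next retranslation; the function names and this prefix-erasure definition are assumptions for illustration, not necessarily the exact metric used in the paper.

def erasure(prev_tokens, new_tokens):
    # Count tokens of the previous output that the new output overwrites,
    # i.e. everything after the longest common prefix.
    common = 0
    for p, n in zip(prev_tokens, new_tokens):
        if p != n:
            break
        common += 1
    return len(prev_tokens) - common

def total_flicker(outputs):
    # Sum prefix erasure over successive retranslation outputs.
    return sum(erasure(prev.split(), new.split())
               for prev, new in zip(outputs, outputs[1:]))

# Example: the final retranslation rewrites "sat" as "was sitting",
# erasing 3 of the 5 tokens already shown.
outputs = [
    "the cat",
    "the cat sat on the",
    "the cat was sitting on the mat",
]
print(total_flicker(outputs))  # 3

Lower erasure corresponds to less flicker; making successive outputs more prefix-consistent (e.g. via self-training or biased beam search, as studied in the paper) reduces this count.
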
Anthology ID: 2023.eacl-main.270
Volume: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month: May
Year: 2023
Address: Dubrovnik, Croatia
Editors: Andreas Vlachos, Isabelle Augenstein
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 3734–3744
URL: https://aclanthology.org/2023.eacl-main.270
DOI: 10.18653/v1/2023.eacl-main.270
Cite (ACL): Sukanta Sen, Rico Sennrich, Biao Zhang, and Barry Haddow. 2023. Self-training Reduces Flicker in Retranslation-based Simultaneous Translation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3734–3744, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal): Self-training Reduces Flicker in Retranslation-based Simultaneous Translation (Sen et al., EACL 2023)
PDF: https://aclanthology.org/2023.eacl-main.270.pdf