Multi-modal Context Modelling for Machine Translation

Lucia Specia


Abstract
MultiMT is a European Research Council Starting Grant whose aim is to devise data, methods and algorithms to exploit multi-modal information (images, audio, metadata) for context modelling in machine translation and other cross-lingual tasks. The project draws upon different research fields including natural language processing, computer vision, speech processing and machine learning.
Anthology ID:
2018.eamt-main.55
Volume:
Proceedings of the 21st Annual Conference of the European Association for Machine Translation
Month:
May
Year:
2018
Address:
Alicante, Spain
Editors:
Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez, Miquel Esplà-Gomis, Maja Popović, Celia Rico, André Martins, Joachim Van den Bogaert, Mikel L. Forcada
Venue:
EAMT
Pages:
383
URL:
https://aclanthology.org/2018.eamt-main.55
Cite (ACL):
Lucia Specia. 2018. Multi-modal Context Modelling for Machine Translation. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation, page 383, Alicante, Spain.
Cite (Informal):
Multi-modal Context Modelling for Machine Translation (Specia, EAMT 2018)
PDF:
https://aclanthology.org/2018.eamt-main.55.pdf