%0 Conference Proceedings
%T Benefiting from Language Similarity in the Multilingual MT Training: Case Study of Indonesian and Malaysian
%A Poncelas, Alberto
%A Effendi, Johanes
%Y Ojha, Atul Kr.
%Y Liu, Chao-Hong
%Y Vylomova, Ekaterina
%Y Abbott, Jade
%Y Washington, Jonathan
%Y Oco, Nathaniel
%Y Pirinen, Tommi A.
%Y Malykh, Valentin
%Y Logacheva, Varvara
%Y Zhao, Xiaobing
%S Proceedings of the Fifth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2022)
%D 2022
%8 October
%I Association for Computational Linguistics
%C Gyeongju, Republic of Korea
%F poncelas-effendi-2022-benefiting
%X The development of machine translation (MT) has been successful in breaking the language barrier for the world’s top 10-20 languages. For the rest, however, delivering acceptable translation quality remains a challenge due to limited resources. To tackle this problem, most studies focus on augmenting data while overlooking the fact that high-quality natural data can be borrowed from closely related languages. In this work, we propose an MT model training strategy that increases the number of language directions as a means of augmentation in a multilingual setting. Our experimental results using Indonesian and Malaysian on a state-of-the-art MT model showcase the effectiveness and robustness of our method.
%U https://aclanthology.org/2022.loresmt-1.11
%P 84-92