%0 Conference Proceedings
%T Should we find another model?: Improving Neural Machine Translation Performance with ONE-Piece Tokenization Method without Model Modification
%A Park, Chanjun
%A Eo, Sugyeong
%A Moon, Hyeonseok
%A Lim, Heuiseok
%Y Kim, Young-bum
%Y Li, Yunyao
%Y Rambow, Owen
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F park-etal-2021-find
%X Most of the recent Natural Language Processing (NLP) studies are based on the Pretrain-Finetuning Approach (PFA), but in small and medium-sized enterprises or companies with insufficient hardware there are many limitations to servicing NLP application software using such technology due to slow speed and insufficient memory. The latest PFA technologies require large amounts of data, especially for low-resource languages, making them much more difficult to work with. We propose a new tokenization method, ONE-Piece, to address this limitation that combines the morphology-considered subword tokenization method and the vocabulary method used after probing for an existing method that has not been carefully considered before. Our proposed method can also be used without modifying the model structure. We experiment by applying ONE-Piece to Korean, a morphologically-rich and low-resource language. We derive an optimal subword tokenization result for Korean-English machine translation by conducting a case study that combines the subword tokenization method, morphological segmentation, and vocabulary method. Through comparative experiments with all the tokenization methods currently used in NLP research, ONE-Piece achieves performance comparable to the current Korean-English machine translation state-of-the-art model.
%R 10.18653/v1/2021.naacl-industry.13
%U https://aclanthology.org/2021.naacl-industry.13
%U https://doi.org/10.18653/v1/2021.naacl-industry.13
%P 97-104