Masaki Itagaki


Post-MT Term Swapper: Supplementing a Statistical Machine Translation System with a User Dictionary
Masaki Itagaki | Takako Aikawa
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

A statistical machine translation (SMT) system requires homogeneous training data in order to get domain-sensitive (or context-sensitive) terminology translations. If the data contains various domains, it is difficult for an SMT system to learn context-sensitive terminology mappings probabilistically. Yet, terminology translation accuracy is an important issue for MT users. This paper explores an approach to tackle this terminology translation problem for an SMT system. We propose a way to identify terminology translations from MT output and automatically swap them with user-defined translations. Our approach is simple and can be applied to any type of MT system. We call our prototype “Term Swapper”. Term Swapper allows MT users to draw on their own dictionaries without affecting any parts of the MT output except for the terminology translation(s) in question. Using an SMT system developed at Microsoft Research, called MSR-MT (Quirk et al., 2005; Menezes & Quirk, 2005), we conducted initial experiments to investigate the coverage rate of Term Swapper and its impact on the overall quality of MT output. The results from our experiments show high coverage and a positive impact on overall MT quality.
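The core post-processing idea described in the abstract — locate a terminology translation in the MT output and replace it with the user's preferred term — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the dictionary interface (mapping the translation the MT system is expected to emit to the user-preferred term) are assumptions, and the paper's actual system identifies the terms to swap rather than taking them as given.

```python
import re

def swap_terms(mt_output, user_dict):
    """Replace identified terminology translations in MT output with
    user-preferred translations.

    `user_dict` maps a translation the MT system is expected to produce
    to the user's preferred term (hypothetical interface for this sketch).
    Only the matched terms change; the rest of the output is untouched.
    """
    for mt_term, preferred in user_dict.items():
        # \b word boundaries avoid rewriting substrings of longer words
        mt_output = re.sub(r"\b%s\b" % re.escape(mt_term), preferred, mt_output)
    return mt_output

print(swap_terms("Open the folder and select the file.",
                 {"folder": "directory"}))
# -> Open the directory and select the file.
```

Because the swap operates purely on the MT output string, it is MT-system-agnostic, which matches the abstract's claim that the approach can be applied to any type of MT system.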


Automatic validation of terminology translation consistency with statistical method
Masaki Itagaki | Takako Aikawa | Xiaodong He
Proceedings of Machine Translation Summit XI: Papers


Detecting Inter-domain Semantic Shift using Syntactic Similarity
Masaki Itagaki | Anthony Aue | Takako Aikawa
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)

This poster is a preliminary report of our experiments for detecting semantically shifted terms between different domains for the purposes of new concept extraction. A given term in one domain may represent a different concept in another domain. In our approach, we quantify the degree of similarity of words between different domains by measuring the degree of overlap in their domain-specific semantic spaces. The domain-specific semantic spaces are defined by extracting families of syntactically similar words, i.e. words that occur in the same syntactic context. Our method does not rely on any external resources other than a syntactic parser. Yet it has the potential to extract semantically shifted terms between two different domains automatically while paying close attention to contextual information. The organization of the poster is as follows: Section 1 provides our motivation. Section 2 provides an overview of our NLP technology and explains how we extract syntactically similar words. Section 3 describes the design of our experiments and our method. Section 4 provides our observations and preliminary results. Section 5 presents some work to be done in the future and concluding remarks.
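The similarity measure described above — quantifying a term's cross-domain shift by the overlap of its domain-specific semantic spaces — can be sketched as below. This is a hedged illustration: the poster defines each semantic space as the family of syntactically similar words produced by a parser-based method, which is not reproduced here, and Jaccard overlap is an assumed choice of overlap measure.

```python
def semantic_shift_score(similar_words_a, similar_words_b):
    """Overlap between a term's domain-specific semantic spaces,
    represented here as the sets of syntactically similar words found
    for the term in domain A and domain B.

    Jaccard overlap (an assumption for this sketch) returns a value in
    [0, 1]; a score near 0 suggests the term's meaning has shifted
    between the two domains.
    """
    a, b = set(similar_words_a), set(similar_words_b)
    if not (a or b):
        return 0.0  # no evidence in either domain
    return len(a & b) / len(a | b)

# Toy example: "driver" in an automotive vs. a software corpus
print(semantic_shift_score({"motorist", "passenger", "chauffeur"},
                           {"module", "firmware", "kernel"}))
# -> 0.0
```

A zero score for the toy example reflects disjoint similar-word families, the signature of a semantically shifted term; a score near 1 would indicate the term occurs in the same syntactic contexts in both domains.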