Tomohiro Yamasaki


2024

VE-KD: Vocabulary-Expansion Knowledge-Distillation for Training Smaller Domain-Specific Language Models
Pengju Gao | Tomohiro Yamasaki | Kazunori Imoto
Findings of the Association for Computational Linguistics: EMNLP 2024

We propose VE-KD, a novel method that balances knowledge distillation and vocabulary expansion to train efficient domain-specific language models. Compared with traditional pre-training approaches, VE-KD achieves competitive performance on downstream tasks while reducing model size and using fewer computational resources. It also avoids overfitting during domain adaptation. Our experiments on several biomedical domain tasks show that VE-KD performs well against models such as BioBERT (+1% on HoC) and PubMedBERT (+1% on PubMedQA), with about 96% less training time. Furthermore, it outperforms DistilBERT and Adapt-and-Distill, with a significant improvement on document-level tasks. An investigation of vocabulary size and tolerance, the hyperparameters of our method, provides insights for further model optimization. The fact that VE-KD maintains its advantages even when the corpus is small suggests that it is a practical approach for domain-specific language tasks and is transferable to other domains for broader applications.
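
The abstract does not give implementation details, but the two ingredients it names, soft-label knowledge distillation and vocabulary expansion, can be sketched roughly as below. This is a minimal illustration assuming the HuggingFace transformers API (add_tokens, resize_token_embeddings) and a subword-average initialization for new domain tokens, a common heuristic; the loss weights, temperature, and helper names are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-scaled teacher and student
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def expand_vocab(model, tokenizer, new_tokens):
    # Tokenize the new domain terms *before* adding them, so we still see
    # their original subword decompositions.
    pieces = [tokenizer.tokenize(t) for t in new_tokens]
    old_size = len(tokenizer)
    tokenizer.add_tokens(new_tokens)
    model.resize_token_embeddings(len(tokenizer))
    emb = model.get_input_embeddings().weight
    with torch.no_grad():
        for i, subwords in enumerate(pieces):
            ids = tokenizer.convert_tokens_to_ids(subwords)
            # Initialize each new token as the mean of its subword embeddings
            # (an assumption of this sketch, not necessarily VE-KD's scheme).
            emb[old_size + i] = emb[ids].mean(dim=0)
```

Note that once the student's vocabulary is expanded, its output space no longer matches the teacher's, so the distillation term can only be applied where the two align; how VE-KD reconciles the expanded vocabulary with the teacher's signal is exactly what the paper specifies and this sketch does not.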

2022

Grapheme-to-Phoneme Conversion for Thai using Neural Regression Models
Tomohiro Yamasaki
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

We propose a novel Thai grapheme-to-phoneme conversion method based on a neural regression model trained to predict the similarity between a candidate and the correct pronunciation. After generating a set of candidates for an input word or phrase using orthography rules, the model selects the candidate with the highest predicted similarity. The method can be applied to languages other than Thai simply by preparing sufficient orthography rules, and it can reduce the mistakes that neural network models often make. We show that the accuracy of the proposed method is 0.931, comparable to that of encoder-decoder sequence models. We also demonstrate that the proposed method is superior in terms of the difference between correct and predicted pronunciations: encoder-decoder sequence models sometimes produce incorrect, strange output, whereas errors made by the proposed method stay within the expected range.
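
As a rough illustration of the select-by-regression idea, the sketch below scores rule-generated candidate pronunciations with a small encoder and keeps the highest-scoring one. The GRU architecture, dimensions, and names are assumptions made for the sake of a runnable example; the paper's actual regression model and its similarity target are not specified in the abstract.

```python
import torch
import torch.nn as nn

class SimilarityRegressor(nn.Module):
    """Scores (grapheme sequence, candidate phoneme sequence) pairs;
    a higher score means a more plausible pronunciation."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)

    def encode(self, ids):
        # Final GRU hidden state as a fixed-size sequence representation.
        _, h = self.enc(self.emb(ids))
        return h[-1]

    def forward(self, word_ids, cand_ids):
        pair = torch.cat([self.encode(word_ids), self.encode(cand_ids)], dim=-1)
        return self.head(pair).squeeze(-1)

def best_pronunciation(model, word_ids, candidates):
    # Candidates come from the orthography rules; the regressor only
    # rescores them, so the output is always a rule-licensed pronunciation.
    scores = torch.stack([model(word_ids, c) for c in candidates])
    return candidates[int(scores.argmax())]
```

Restricting the output to rule-generated candidates is what bounds the errors: unlike a free-form encoder-decoder, the model cannot emit a pronunciation the orthography rules would never produce.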

2011

The Semi-Automatic Construction of Part-Of-Speech Taggers for Specific Languages by Statistical Methods
Tomohiro Yamasaki | Hiromi Wakaki | Masaru Suzuki
Proceedings of the 2nd Workshop on South Southeast Asian Natural Language Processing (WSSANLP)

Topic Models with Logical Constraints on Words
Hayato Kobayashi | Hiromi Wakaki | Tomohiro Yamasaki | Masaru Suzuki
Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing