Trung Huu Bui


2022

ViHealthBERT: Pre-trained Language Models for Vietnamese in Health Text Mining
Nguyen Minh | Vu Hoang Tran | Vu Hoang | Huy Duc Ta | Trung Huu Bui | Steven Quoc Hung Truong
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Pre-trained language models have become crucial to achieving competitive results across many Natural Language Processing (NLP) problems. The number of monolingual pre-trained models for low-resource languages has grown significantly, but most of them target the general domain, and strong domain-specific baseline language models remain scarce. We introduce ViHealthBERT, the first domain-specific pre-trained language model for Vietnamese healthcare. Our model achieves strong results, outperforming general-domain language models on all health-related datasets. Moreover, we present Vietnamese healthcare datasets for two tasks: Acronym Disambiguation (AD) and Frequently Asked Questions (FAQ) Summarization. We release ViHealthBERT to facilitate future research and downstream applications for domain-specific Vietnamese NLP. Our dataset and code are available at https://github.com/demdecuong/vihealthbert.
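
For readers who want to try the released checkpoint, below is a minimal usage sketch with the Hugging Face transformers library. The model identifier and the word-segmentation note are assumptions, not confirmed by the abstract; consult the linked repository for the exact released checkpoint name and preprocessing steps.

```python
# Minimal sketch of extracting contextual embeddings with ViHealthBERT.
# ASSUMPTION: the Hub identifier below is hypothetical; check the repository
# at https://github.com/demdecuong/vihealthbert for the released name.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "demdecuong/vihealthbert-base-word"  # assumed Hub identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# Vietnamese health-domain input. Word-level Vietnamese models typically
# expect word-segmented text (e.g., via VnCoreNLP), so a segmentation step
# may be needed upstream (assumption).
text = "Bệnh nhân bị sốt cao và ho kéo dài"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Token-level representations usable for downstream health text mining
# tasks such as NER or the AD and FAQ Summarization tasks mentioned above.
print(outputs.last_hidden_state.shape)  # (1, seq_len, hidden_size)
```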