Pubordee Aussavavirojekul


2024

On Creating an English-Thai Code-switched Machine Translation in Medical Domain
Parinthapat Pengpun | Krittamate Tiankanon | Amrest Chinkamol | Jiramet Kinchagawat | Pitchaya Chairuengjitjaras | Pasit Supholkhan | Pubordee Aussavavirojekul | Chiraphat Boonnag | Kanyakorn Veerakanjana | Hirunkul Phimsiri | Boonthicha Sae-jia | Nattawach Sataudom | Piyalitt Ittichaiwong | Peerat Limkonchotiwat
Findings of the Association for Computational Linguistics: EMNLP 2024

Machine translation (MT) in the medical domain plays a pivotal role in enhancing healthcare quality and disseminating medical knowledge. Despite advancements in English-Thai MT technology, common MT approaches often underperform in the medical field due to their inability to precisely translate medical terminologies. Our research prioritizes not merely improving translation accuracy but also maintaining medical terminology in English within the translated text through code-switched (CS) translation. We developed a method to produce CS medical translation data, fine-tuned a CS translation model with this data, and evaluated its performance against strong baselines, such as Google Neural Machine Translation (NMT) and GPT-3.5/GPT-4. Our model demonstrated competitive performance in automatic metrics and was highly favored in human preference evaluations. Our evaluation results also show that medical professionals significantly prefer CS translations that maintain critical English terms accurately, even if fluency is slightly compromised. Our code and test set are publicly available at https://github.com/preceptorai-org/NLLB_CS_EM_NLP2024.
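The released repository is built around NLLB-style seq2seq translation. As a minimal sketch (not the authors' released code), the snippet below shows how a fine-tuned English-Thai code-switched checkpoint would typically be invoked via Hugging Face transformers; the checkpoint name is a placeholder, and the expectation that medical terms stay in English comes from the CS fine-tuning described in the abstract.

```python
# Minimal sketch, assuming an NLLB-style checkpoint loadable with transformers.
# MODEL_NAME is a placeholder, not the authors' fine-tuned CS model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/nllb-200-distilled-600M"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

text = "The patient was diagnosed with atrial fibrillation and prescribed warfarin."
inputs = tokenizer(text, return_tensors="pt")

# Force Thai as the target language; a CS-fine-tuned model would be expected to
# keep terms such as "atrial fibrillation" and "warfarin" in English.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("tha_Thai"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```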

SICAR at RRG2024: GPU Poor’s Guide to Radiology Report Generation
Kiartnarin Udomlapsakul | Parinthapat Pengpun | Tossaporn Saengja | Kanyakorn Veerakanjana | Krittamate Tiankanon | Pitikorn Khlaisamniang | Pasit Supholkhan | Amrest Chinkamol | Pubordee Aussavavirojekul | Hirunkul Phimsiri | Tara Sripo | Chiraphat Boonnag | Trongtum Tongdee | Thanongchai Siriapisith | Pairash Saiviroonporn | Jiramet Kinchagawat | Piyalitt Ittichaiwong
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing

Radiology report generation (RRG) aims to create free-text radiology reports from clinical imaging. Our solution employs a lightweight multimodal language model (MLLM) enhanced with a two-stage post-processing strategy, utilizing a Large Language Model (LLM) to boost diagnostic accuracy and ensure patient safety. We introduce the “First, Do No Harm” SafetyNet, which incorporates Xraydar, an advanced X-ray classification model, to cross-verify the model outputs and specifically address false negatives from the MLLM. This comprehensive approach combines the efficiency of lightweight models with the robustness of thorough post-processing techniques, offering a reliable solution for radiology report generation. Our system achieved fourth place on the F1-Radgraph metric for findings generation in the Radiology Report Generation Shared Task (RRG24).
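To make the cross-verification idea concrete, here is a minimal sketch of the kind of check the "First, Do No Harm" SafetyNet performs, under assumed interfaces: `classifier_findings` stands in for positive labels from an X-ray classifier such as Xraydar, and `draft_report` for the MLLM's generated report. This is an illustration of the described safety check, not the authors' implementation, and it deliberately ignores negation handling.

```python
# Minimal sketch of classifier-vs-report cross-verification (assumed interfaces,
# not the SICAR codebase). Flags classifier-positive findings absent from the
# draft report as candidate false negatives for LLM post-processing.
from typing import List


def safetynet_check(draft_report: str, classifier_findings: List[str]) -> List[str]:
    """Return classifier-positive findings never mentioned in the draft report."""
    report_lower = draft_report.lower()
    return [f for f in classifier_findings if f.lower() not in report_lower]


draft = "Findings: The lungs are clear. No pleural effusion."
positives = ["cardiomegaly", "pleural effusion"]  # hypothetical classifier output

missed = safetynet_check(draft, positives)
if missed:
    # In the described pipeline, the second-stage LLM would reconcile these
    # discrepancies rather than blindly appending them to the report.
    print("Flag for review, possible false negatives:", missed)
```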