Hajung Kim
2024
KU-DMIS at EHRSQL 2024: Generating SQL query via question templatization in EHR
Hajung Kim | Chanhwi Kim | Hoonick Lee | Kyochul Jang | Jiwoo Lee | Kyungjae Lee | Gangwoo Kim | Jaewoo Kang
Proceedings of the 6th Clinical Natural Language Processing Workshop
Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases. A significant challenge in this process is detecting and rejecting unanswerable questions that request information outside the database’s scope or exceed the system’s capabilities. In this paper, we introduce a novel text-to-SQL framework that focuses on standardizing the structure of questions into a templated format. Our framework begins by fine-tuning GPT-3.5-turbo, a powerful large language model (LLM), with detailed prompts involving the table schemas of the EHR database system. Our approach shows promising results on the EHRSQL-2024 benchmark dataset, part of the ClinicalNLP shared task. Although the fine-tuned GPT model achieved third place on the development set, it struggled with the diverse questions in the test set. With our framework, we improve the system’s adaptability and achieve fourth place on the official leaderboard of the EHRSQL-2024 challenge.
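To make the first stage concrete, here is a minimal sketch of prompting a fine-tuned GPT-3.5-turbo with an EHR table schema and rejecting out-of-scope questions. This is not the authors’ exact pipeline: the fine-tuned model id, the toy two-table schema, and the "null" abstention convention are illustrative assumptions.

```python
# Minimal sketch: schema-grounded text-to-SQL with abstention.
# Assumptions (not from the paper): the OpenAI chat-completions API,
# a hypothetical fine-tuned model id, a toy two-table schema, and
# "null" as the marker for unanswerable questions.
from openai import OpenAI

client = OpenAI()

SCHEMA = (
    "Table patients(subject_id, gender, dob)\n"
    "Table admissions(hadm_id, subject_id, admittime, dischtime)"
)

def question_to_sql(question: str) -> str | None:
    """Return a SQL string, or None if the question is unanswerable."""
    response = client.chat.completions.create(
        model="ft:gpt-3.5-turbo:example",  # hypothetical fine-tuned model id
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    f"Given these EHR tables:\n{SCHEMA}\n"
                    "Write a SQL query that answers the question, "
                    "or reply 'null' if it cannot be answered."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    sql = response.choices[0].message.content.strip()
    return None if sql.lower() == "null" else sql

print(question_to_sql("How many patients were admitted in 2023?"))
```

Folding abstention into the same generation step lets a single model both write SQL and reject out-of-scope questions, the two requirements the abstract highlights.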
2023
KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for Radiology Report Summarization
Gangwoo Kim | Hajung Kim | Lei Ji | Seongsu Bae | Chanhwi Kim | Mujeen Sung | Hyunjae Kim | Kun Yan | Eric Chang | Jaewoo Kang
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
In this paper, we introduce CheXOFA, a new pre-trained vision-language model (VLM) for the chest X-ray domain. Our model is initially pre-trained on various multimodal datasets within the general domain before being transferred to the chest X-ray domain. Following a prominent VLM, we unify various domain-specific tasks into a simple sequence-to-sequence schema. This unified schema enables the model to effectively learn the required knowledge and skills from limited in-domain resources. Demonstrating superior performance on the benchmark datasets provided by the BioNLP shared task (Delbrouck et al., 2023), our model benefits from its training across multiple tasks and domains. With subtle techniques including ensemble and factual calibration, our system achieves first place on the RadSum23 leaderboard for the hidden test set.
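As a rough illustration of the sequence-to-sequence unification, each task can be reduced to an instruction-prefixed text-in/text-out call. The sketch below uses a public text-only T5 checkpoint as a stand-in, since CheXOFA itself is a vision-language model with no public checkpoint assumed here; the task prefix and sample report are illustrative assumptions.

```python
# Rough illustration of casting every task as sequence-to-sequence.
# Stand-in assumptions (not CheXOFA): a public text-only t5-small
# checkpoint and an invented task prefix / sample report.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def run_task(task_prefix: str, source_text: str) -> str:
    # Once phrased as an instruction, summarization, report
    # generation, and QA all share this one text-to-text interface.
    inputs = tokenizer(f"{task_prefix}: {source_text}", return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

findings = ("Heart size is normal. Lungs are clear without focal "
            "consolidation, pleural effusion, or pneumothorax.")
print(run_task("summarize", findings))
```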