2024
SLM as Guardian: Pioneering AI Safety with Small Language Model
Ohjoon Kwon | Donghyeon Jeon | Nayoung Choi | Gyu-Hwung Cho | Hwiyeol Jo | Changbong Kim | Hyunwoo Lee | Inho Kang | Sun Kim | Taiwoo Park
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Most prior safety research on large language models (LLMs) has focused on enhancing the alignment of LLMs to better suit the safety requirements of their use cases. However, internalizing such safeguard features into larger models has brought challenges of higher training cost and unintended degradation of helpfulness. In this paper, we leverage a smaller LLM for both harmful query detection and safeguard response generation. We introduce our safety requirements and the taxonomy of harmfulness categories, and then propose a multi-task learning mechanism fusing the two tasks into a single model. We demonstrate the effectiveness of our approach, achieving harmful query detection and safeguard response performance on par with or surpassing that of publicly available LLMs.
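For illustration only, a minimal PyTorch sketch of what such a multi-task setup could look like: one shared small backbone with a classification head for harmful query detection and a language-modeling head for safeguard response generation, trained with a weighted sum of the two losses. The class name, layer sizes, category count, and loss weighting below are assumptions for the sketch, not the paper's implementation.

import torch.nn as nn

class GuardianSLM(nn.Module):
    """Hypothetical shared backbone with two task heads (illustrative only)."""
    def __init__(self, vocab_size=32000, d_model=512, n_heads=8,
                 n_layers=4, n_harm_categories=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.detect_head = nn.Linear(d_model, n_harm_categories)  # harmful-query detection
        self.lm_head = nn.Linear(d_model, vocab_size)             # safeguard response generation

    def forward(self, input_ids):
        h = self.backbone(self.embed(input_ids))          # [batch, seq, d_model]
        detect_logits = self.detect_head(h.mean(dim=1))   # pooled representation -> category logits
        lm_logits = self.lm_head(h)                       # per-token logits for response tokens
        return detect_logits, lm_logits

def multitask_loss(detect_logits, lm_logits, harm_labels, response_ids, alpha=0.5):
    """Weighted sum of detection and generation losses (alpha is an assumed hyperparameter)."""
    ce = nn.CrossEntropyLoss()
    detection = ce(detect_logits, harm_labels)
    generation = ce(lm_logits.reshape(-1, lm_logits.size(-1)), response_ids.reshape(-1))
    return alpha * detection + (1 - alpha) * generation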
2023
Clinical Note Owns its Hierarchy: Multi-Level Hypergraph Neural Networks for Patient-Level Representation Learning
Nayeon Kim | Yinhua Piao | Sun Kim
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Leveraging knowledge from electronic health records (EHRs) to predict a patient’s condition is essential to the effective delivery of appropriate care. Clinical notes in patient EHRs contain valuable information from healthcare professionals, but have been underused due to their difficult contents and complex hierarchies. Recently, hypergraph-based methods have been proposed for document classification. However, directly applying existing hypergraph methods to clinical notes does not sufficiently utilize the patient’s hierarchy information, and clinical semantic information can be degraded by (1) frequent neutral words and (2) hierarchies with imbalanced distribution. Thus, we propose a taxonomy-aware multi-level hypergraph neural network (TM-HGNN), where multi-level hypergraphs assemble useful neutral words with rare keywords via note-level and taxonomy-level hyperedges to retain the clinical semantic information. The constructed patient hypergraphs are fed into hierarchical message passing layers to learn more balanced multi-level knowledge at the note and taxonomy levels. We validate the effectiveness of TM-HGNN through extensive experiments with the MIMIC-III dataset on the benchmark in-hospital mortality prediction task.
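As a rough illustration of the idea (assumed structure, not the authors' TM-HGNN code): word nodes can be connected by note-level and taxonomy-level hyperedges, with messages propagated node -> hyperedge -> node under degree normalization. The helper names and the per-note input format below are hypothetical.

import torch

def build_incidence(notes):
    """notes: list of (taxonomy_id, [tokens]) for one patient (assumed format)."""
    vocab = sorted({w for _, toks in notes for w in toks})
    widx = {w: i for i, w in enumerate(vocab)}
    note_edges = []   # one hyperedge per clinical note
    tax_edges = {}    # one hyperedge per taxonomy level
    for tax, toks in notes:
        note_edges.append({widx[w] for w in toks})
        tax_edges.setdefault(tax, set()).update(widx[w] for w in toks)
    edges = note_edges + list(tax_edges.values())
    H = torch.zeros(len(vocab), len(edges))            # node x hyperedge incidence matrix
    for j, edge in enumerate(edges):
        H[list(edge), j] = 1.0
    return H, vocab

def hypergraph_message_passing(H, X):
    """One node -> hyperedge -> node propagation step with degree normalization."""
    De = H.sum(dim=0).clamp(min=1)                     # hyperedge degrees
    Dv = H.sum(dim=1).clamp(min=1)                     # node degrees
    edge_feat = (H.t() @ X) / De.unsqueeze(1)          # aggregate node features per hyperedge
    return (H @ edge_feat) / Dv.unsqueeze(1)           # scatter back to nodes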
2017
BioCreative VI Precision Medicine Track: creating a training corpus for mining protein-protein interactions affected by mutations
Rezarta Islamaj Doğan | Andrew Chatr-aryamontri | Sun Kim | Chih-Hsuan Wei | Yifan Peng | Donald Comeau | Zhiyong Lu
BioNLP 2017
The Precision Medicine Track in BioCreative VI aims to bring together the BioNLP community for a novel challenge focused on mining the biomedical literature in search of mutations and protein-protein interactions (PPI). In order to support this track with an effective training dataset within limited curator time, the track organizers carefully reviewed PubMed articles from two different sources: curated public PPI databases, and the results of state-of-the-art public text mining tools. We detail here the data collection, manual review and annotation process, and describe the characteristics of this training corpus. We also describe a corpus performance baseline. This analysis will provide useful information to developers and researchers for comparing and developing innovative text mining approaches for the BioCreative VI challenge and other Precision Medicine related applications.
Deep Learning for Biomedical Information Retrieval: Learning Textual Relevance from Click Logs
Sunil Mohan | Nicolas Fiorini | Sun Kim | Zhiyong Lu
BioNLP 2017
We describe a Deep Learning approach to modeling the relevance of a document’s text to a query, applied to biomedical literature. Instead of mapping each document and query to a common semantic space, we compute a variable-length difference vector between the query and document which is then passed through a deep convolution stage followed by a deep regression network to produce the estimated probability of the document’s relevance to the query. Despite the small amount of training data, this approach produces a more robust predictor than computing similarities between semantic vector representations of the query and document, and also results in significant improvements over traditional IR text factors. In the future, we plan to explore its application in improving PubMed search.
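A minimal sketch of this kind of architecture, under the assumption of token embeddings, a per-token query-document difference, a 1-D convolution stage, and a small regression head producing a relevance probability; the class name, dimensions, and pooling choice are illustrative, not the paper's exact model.

import torch
import torch.nn as nn

class DiffRelevanceModel(nn.Module):
    """Hypothetical difference-vector relevance scorer (illustrative only)."""
    def __init__(self, vocab_size=50000, d_emb=128, n_filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_emb)
        self.conv = nn.Sequential(
            nn.Conv1d(d_emb, n_filters, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                           # pool over the variable document length
        )
        self.regressor = nn.Sequential(
            nn.Linear(n_filters, 32), nn.ReLU(), nn.Linear(32, 1),
        )

    def forward(self, query_ids, doc_ids):
        q = self.embed(query_ids).mean(dim=1, keepdim=True)    # [B, 1, d] pooled query
        d = self.embed(doc_ids)                                # [B, L_doc, d]
        diff = d - q                                           # per-token query-document difference
        feats = self.conv(diff.transpose(1, 2)).squeeze(-1)    # [B, n_filters]
        return torch.sigmoid(self.regressor(feats)).squeeze(-1)  # estimated P(relevant)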
2016
PubTermVariants: biomedical term variants and their use for PubMed search
Lana Yeganova | Won Kim | Sun Kim | Rezarta Islamaj Doğan | Wanli Liu | Donald C Comeau | Zhiyong Lu | W John Wilbur
Proceedings of the 15th Workshop on Biomedical Natural Language Processing
2015
Summarizing Topical Contents from PubMed Documents Using a Thematic Analysis
Sun Kim | Lana Yeganova | W. John Wilbur
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
2012
Classifying Gene Sentences in Biomedical Literature by Combining High-Precision Gene Identifiers
Sun Kim | Won Kim | Don Comeau | W. John Wilbur
BioNLP: Proceedings of the 2012 Workshop on Biomedical Natural Language Processing