Liwen Xu
2022
Building a Clinically-Focused Problem List From Medical Notes
Amir Feder | Itay Laish | Shashank Agarwal | Uri Lerner | Avel Atias | Cathy Cheung | Peter Clardy | Alon Peled-Cohen | Rachana Fellinger | Hengrui Liu | Lan Huong Nguyen | Birju Patel | Natan Potikha | Amir Taubenfeld | Liwen Xu | Seung Doo Yang | Ayelet Benjamini | Avinatan Hassidim
Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI)
Clinical notes often contain useful information not documented in structured data, but their unstructured nature can lead to critical patient-related information being missed. To increase the likelihood that this valuable information is utilized for patient care, algorithms that summarize notes into a problem list have been proposed. Focused on identifying medically-relevant entities in the free-form text, these solutions are often detached from a canonical ontology and do not allow downstream use of the detected text spans. Mitigating these issues, we present here a system for generating a canonical problem list from medical notes, consisting of two major stages. At the first stage, annotation, we use a transformer model to detect all clinical conditions mentioned in a single note. These clinical conditions are then grounded to a predefined ontology and linked to spans in the text. At the second stage, summarization, we develop a novel algorithm that aggregates over the set of clinical conditions detected across all of the patient’s notes and produces a concise patient summary that organizes their most important conditions.
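The abstract describes a two-stage pipeline: per-note annotation (transformer-based detection of clinical conditions, grounded to an ontology) followed by cross-note aggregation into a problem list. Below is a minimal Python sketch of that shape; the transformer tagger and entity linker are replaced by a toy lexicon, and frequency ranking stands in for the paper's aggregation algorithm. All names and concept IDs here are illustrative, not the authors' artifacts.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Condition:
    concept_id: str  # ontology concept the mention was grounded to
    span: str        # text span in the note that evidenced it

# Toy phrase->concept lexicon standing in for the transformer tagger +
# entity linker used in the paper; concept IDs are made up.
TOY_LEXICON = {
    "chest pain": "COND:0001",
    "type 2 diabetes": "COND:0002",
    "hypertension": "COND:0003",
}

def annotate(note: str) -> list[Condition]:
    """Stage 1 (annotation): detect condition mentions in a single note
    and ground each one to a canonical concept."""
    text = note.lower()
    return [Condition(cid, phrase)
            for phrase, cid in TOY_LEXICON.items() if phrase in text]

def summarize(notes: list[str], top_k: int = 10) -> list[str]:
    """Stage 2 (summarization): aggregate grounded conditions over all of a
    patient's notes. Frequency ranking is an illustrative stand-in for the
    paper's aggregation algorithm."""
    counts = Counter(c.concept_id for note in notes for c in annotate(note))
    return [concept for concept, _ in counts.most_common(top_k)]

notes = [
    "Pt presents with chest pain. Hx of type 2 diabetes.",
    "Follow-up for type 2 diabetes; hypertension well controlled.",
]
print(summarize(notes))  # e.g. ['COND:0002', 'COND:0001', 'COND:0003']
```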
2021
ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization
Liwen Xu | Yan Zhang | Lei Hong | Yi Cai | Szui Sung
Proceedings of the 20th Workshop on Biomedical Language Processing
In this paper, we describe our system for the MEDIQA 2021 shared tasks. We first describe our method for the second task, multi-answer summarization (MAS). For extractive summarization, we follow the rules of (CITATION): candidate sentences are first coarsely scored with a RoBERTa model, and a Markov chain model then evaluates the sentences at a finer granularity. Our team placed first in overall performance, fourth in the MAS task, seventh in the RRS task, and eleventh in the QS task. For the QS and RRS tasks, we investigate the performance of end-to-end pre-trained seq2seq models. Experiments show that adversarial training and back-translation are beneficial for improving fine-tuning performance.
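The coarse-to-fine extractive step described above (RoBERTa for rough candidate scoring, a Markov chain for fine-grained ranking) might be sketched as follows. The RoBERTa scorer is stubbed out, and the Markov chain is a LexRank-style power iteration over token-overlap similarities; all function names and the similarity measure are assumptions for illustration, not the authors' code.

```python
import numpy as np

def roberta_relevance(question: str, sentences: list[str]) -> np.ndarray:
    """Coarse stage: stand-in for a RoBERTa relevance scorer (random here)."""
    return np.random.default_rng(0).random(len(sentences))

def overlap_sim(a: str, b: str) -> float:
    """Jaccard token overlap, a simple stand-in for sentence similarity."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def markov_rank(sentences: list[str], damping: float = 0.85,
                iters: int = 50) -> np.ndarray:
    """Fine stage: stationary distribution of a damped Markov chain whose
    states are sentences and whose transitions follow pairwise similarity."""
    n = len(sentences)
    sim = np.array([[overlap_sim(a, b) for b in sentences] for a in sentences])
    np.fill_diagonal(sim, 0.0)
    sim += 1e-9  # avoid all-zero rows before normalizing
    trans = sim / sim.sum(axis=1, keepdims=True)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1 - damping) / n + damping * (p @ trans)
    return p

def extract_summary(question: str, sentences: list[str],
                    keep: int = 20, top_k: int = 3) -> list[str]:
    # Prune to the `keep` best candidates by coarse score, then rank finely.
    coarse = roberta_relevance(question, sentences)
    candidates = [s for _, s in sorted(zip(coarse, sentences), reverse=True)[:keep]]
    fine = markov_rank(candidates)
    chosen = np.argsort(fine)[::-1][:top_k]
    return [candidates[i] for i in sorted(chosen)]  # preserve reading order
```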