2021
Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations
Chaitanya Shivade | Rashmi Gangadharaiah | Spandana Gella | Sandeep Konam | Shaoqing Yuan | Yi Zhang | Parminder Bhatia | Byron Wallace
Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations
Extracting Appointment Spans from Medical Conversations
Nimshi Venkat Meripo | Sandeep Konam
Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations
Extracting structured information from medical conversations can reduce the documentation burden for doctors and help patients follow through with their care plan. In this paper, we introduce a novel task of extracting appointment spans from medical conversations. We frame this task as a sequence tagging problem and focus on extracting spans for appointment reason and time. However, annotating medical conversations is expensive, time-consuming, and requires considerable domain expertise. Hence, we propose to leverage weak supervision approaches, namely incomplete supervision, inaccurate supervision, and a hybrid supervision approach, and evaluate both generic and domain-specific ELMo and BERT embeddings using sequence tagging models. The best-performing model, a domain-specific BERT variant trained with weak hybrid supervision, obtains an F1 score of 79.32.
2020
Weakly Supervised Medication Regimen Extraction from Medical Conversations
Dhruvesh Patel | Sandeep Konam | Sai Prabhakar
Proceedings of the 3rd Clinical Natural Language Processing Workshop
Automated Medication Regimen (MR) extraction from medical conversations can not only improve recall and help patients follow through with their care plan, but also reduce the documentation burden for doctors. In this paper, we focus on extracting spans for frequency, route, and change, corresponding to medications discussed in the conversation. We first describe a unique dataset of annotated doctor-patient conversations and then present a weakly supervised model architecture that can perform span extraction using noisy classification data. The model utilizes an attention bottleneck inside a classification model to perform the extraction. We experiment with several variants of attention scoring and projection functions and propose a novel transformer-based attention scoring function (TAScore). The proposed combination of TAScore and Fusedmax projection achieves a 10-point increase in Longest Common Substring F1 compared to the baseline of additive scoring plus softmax projection.
Towards Understanding ASR Error Correction for Medical Conversations
Anirudh Mani | Shruti Palaskar | Sandeep Konam
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations
Domain adaptation for Automatic Speech Recognition (ASR) error correction via machine translation is a useful technique for improving out-of-domain outputs of pre-trained ASR systems to obtain optimal results for specific in-domain tasks. We apply this technique to our dataset of doctor-patient conversations using two off-the-shelf ASR systems: Google ASR (commercial) and the ASPIRE model (open-source). We train a sequence-to-sequence machine translation model and evaluate it on seven specific UMLS Semantic Types, including Pharmacological Substance, Sign or Symptom, and Diagnostic Procedure. Lastly, we break down, analyze, and discuss the 7% overall improvement in word error rate with respect to each Semantic Type.