In-Domain Pre-Training Improves Clinical Note Generation from Doctor-Patient Conversations
Colin Grambow | Longxiang Zhang | Thomas Schaaf
Proceedings of the First Workshop on Natural Language Generation in Healthcare
Summarization of doctor-patient conversations into clinical notes by medical scribes is an essential process for effective clinical care. Pre-trained transformer models have shown considerable success in this area, but the domain shift from standard NLP tasks to the medical domain continues to present challenges. We build upon several recent works to show that additional pre-training with in-domain medical conversations leads to performance gains for clinical summarization. In addition to conventional evaluation metrics, we also explore a clinical named entity recognition model for concept-based evaluation. Finally, we contrast long-sequence transformers with a common transformer model, BART. Overall, our findings corroborate research in non-medical domains and suggest that in-domain pre-training combined with transformers for long sequences is an effective strategy for summarizing clinical encounters.