Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization

Bharath Chintagunta, Namit Katariya, Xavier Amatriain, Anitha Kannan


Abstract
In medical dialogue summarization, summaries must be coherent and must capture all the medically relevant information in the dialogue. However, learning effective models for summarization requires large amounts of labeled data, which is especially hard to obtain. We present an algorithm to create synthetic training data with an explicit focus on capturing medically relevant information. We utilize GPT-3 as the backbone of our algorithm and scale 210 human-labeled examples to yield results comparable to using 6400 human-labeled examples (~30x) by leveraging low-shot learning and an ensemble method. In detailed experiments, we show that this approach produces high-quality training data that can further be combined with human-labeled data to get summaries that are strongly preferable to those produced by models trained on human data alone, in terms of both medical accuracy and coherence.
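The abstract's recipe can be sketched at a high level: prompt a large language model with a handful of labeled (dialogue, summary) exemplars, sample several candidate summaries per unlabeled dialogue, and keep the candidate that best covers the medical concepts mentioned in the dialogue. The following is a minimal illustrative sketch, not the authors' implementation; the concept list, function names, and coverage heuristic are all assumptions for the example.

```python
# Hypothetical sketch of the ensemble-selection step: among several sampled
# candidate summaries, keep the one covering the most medical concepts that
# appear in the source dialogue. The concept lexicon here is a toy stand-in.

MEDICAL_CONCEPTS = {"fever", "cough", "ibuprofen", "rash", "headache"}

def concept_coverage(dialogue: str, summary: str) -> float:
    """Fraction of medical concepts in the dialogue that also appear in the summary."""
    in_dialogue = {c for c in MEDICAL_CONCEPTS if c in dialogue.lower()}
    if not in_dialogue:
        return 1.0  # nothing medical to cover
    covered = {c for c in in_dialogue if c in summary.lower()}
    return len(covered) / len(in_dialogue)

def select_best_summary(dialogue: str, candidates: list) -> str:
    """Pick the ensemble candidate with the highest medical-concept coverage."""
    return max(candidates, key=lambda s: concept_coverage(dialogue, s))
```

For example, given a dialogue mentioning both fever and cough, a candidate summary covering both concepts would be preferred over one that mentions only the fever. Selected (dialogue, summary) pairs would then serve as synthetic training data alongside the human-labeled examples.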
Anthology ID:
2021.nlpmc-1.9
Volume:
Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations
Month:
June
Year:
2021
Address:
Online
Venue:
NLPMC
Publisher:
Association for Computational Linguistics
Note:
Pages:
66–76
URL:
https://aclanthology.org/2021.nlpmc-1.9
DOI:
10.18653/v1/2021.nlpmc-1.9
Bibkey:
Cite (ACL):
Bharath Chintagunta, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2021. Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization. In Proceedings of the Second Workshop on Natural Language Processing for Medical Conversations, pages 66–76, Online. Association for Computational Linguistics.
Cite (Informal):
Medically Aware GPT-3 as a Data Generator for Medical Dialogue Summarization (Chintagunta et al., NLPMC 2021)
PDF:
https://aclanthology.org/2021.nlpmc-1.9.pdf
Video:
https://aclanthology.org/2021.nlpmc-1.9.mp4
Data
CNN/Daily Mail