A Continued Pretrained LLM Approach for Automatic Medical Note Generation
Dong Yuan | Eti Rastogi | Gautam Naik | Sree Prasanna Rajagopal | Sagar Goyal | Fen Zhao | Bharath Chintagunta | Jeffrey Ward
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
LLMs are revolutionizing NLP tasks. However, the most advanced LLMs, such as GPT-4, are often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM purpose-built for medical conversations and evaluated on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA on PubMedQA, with an accuracy of 78.4%, and achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL surpasses GPT-4 and Med-PaLM 2 in identifying more correct medical concepts and exceeds human scribes and other comparable models in correctness and completeness.