Gautam Naik
2024
A Continued Pretrained LLM Approach for Automatic Medical Note Generation
Dong Yuan | Eti Rastogi | Gautam Naik | Sree Prasanna Rajagopal | Sagar Goyal | Fen Zhao | Bharath Chintagunta | Jeffrey Ward
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Large language models (LLMs) are revolutionizing NLP tasks. However, the most advanced LLMs, such as GPT-4, are often prohibitively expensive for most specialized fields. We introduce HEAL, the first continuously trained 13B LLaMA2-based LLM purpose-built for medical conversations and evaluated on automated scribing. Our results demonstrate that HEAL outperforms GPT-4 and PMC-LLaMA on PubMedQA, with an accuracy of 78.4%. It also achieves parity with GPT-4 in generating medical notes. Remarkably, HEAL identifies more correct medical concepts than GPT-4 and Med-PaLM 2, and exceeds human scribes and other comparable models in correctness and completeness.
2019
MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations
Soujanya Poria | Devamanyu Hazarika | Navonil Majumder | Gautam Naik | Erik Cambria | Rada Mihalcea
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Emotion recognition in conversations is a challenging task that has recently gained popularity due to its potential applications. Until now, however, a large-scale multimodal multi-party emotional conversational database containing more than two speakers per dialogue was missing. Thus, we propose the Multimodal EmotionLines Dataset (MELD), an extension and enhancement of EmotionLines. MELD contains about 13,000 utterances from 1,433 dialogues from the TV series Friends. Each utterance is annotated with emotion and sentiment labels, and encompasses audio, visual, and textual modalities. We propose several strong multimodal baselines and show the importance of contextual and multimodal information for emotion recognition in conversations. The full dataset is available for use at http://affective-meld.github.io.