%0 Conference Proceedings
%T Few-shot fine-tuning SOTA summarization models for medical dialogues
%A Navarro, David Fraile
%A Dras, Mark
%A Berkovsky, Shlomo
%Y Ippolito, Daphne
%Y Li, Liunian Harold
%Y Pacheco, Maria Leonor
%Y Chen, Danqi
%Y Xue, Nianwen
%S Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop
%D 2022
%8 July
%I Association for Computational Linguistics
%C Hybrid: Seattle, Washington + Online
%F navarro-etal-2022-shot
%X Abstractive summarization of medical dialogues presents a challenge for standard training approaches, given the paucity of suitable datasets. We explore the performance of state-of-the-art models with zero-shot and few-shot learning strategies and measure the impact of pretraining with general domain and dialogue-specific text on the summarization performance.
%R 10.18653/v1/2022.naacl-srw.32
%U https://aclanthology.org/2022.naacl-srw.32
%U https://doi.org/10.18653/v1/2022.naacl-srw.32
%P 254-266