%0 Conference Proceedings
%T Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization
%A Lee, Dongyub
%A Lim, Jungwoo
%A Whang, Taesun
%A Lee, Chanhee
%A Cho, Seungwoo
%A Park, Mingun
%A Lim, Heuiseok
%Y Carenini, Giuseppe
%Y Cheung, Jackie Chi Kit
%Y Dong, Yue
%Y Liu, Fei
%Y Wang, Lu
%S Proceedings of the Third Workshop on New Frontiers in Summarization
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and in Dominican Republic
%F lee-etal-2021-capturing
%X In this paper, we focus on improving the quality of the summary generated by neural abstractive dialogue summarization systems. Even though pre-trained language models generate well-constructed and promising results, it is still challenging to summarize the conversation of multiple participants since the summary should include a description of the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction is required in a draft summary and then generates a revised summary according to the required type. Experimental results show that our proposed method adequately corrects the draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.
%R 10.18653/v1/2021.newsum-1.8
%U https://aclanthology.org/2021.newsum-1.8
%U https://doi.org/10.18653/v1/2021.newsum-1.8
%P 65-73