%0 Conference Proceedings
%T Contextual Fine-to-Coarse Distillation for Coarse-grained Response Selection in Open-Domain Conversations
%A Chen, Wei
%A Gong, Yeyun
%A Xu, Can
%A Hu, Huang
%A Yao, Bolun
%A Wei, Zhongyu
%A Fan, Zhihao
%A Hu, Xiaowu
%A Zhou, Bartuer
%A Cheng, Biao
%A Jiang, Daxin
%A Duan, Nan
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F chen-etal-2022-contextual
%X We study the problem of coarse-grained response selection in retrieval-based dialogue systems. The problem is as important as fine-grained response selection, but is less explored in the existing literature. In this paper, we propose a Contextual Fine-to-Coarse (CFC) distilled model for coarse-grained response selection in open-domain conversations. In our CFC model, dense representations of queries, candidate contexts and responses are learned based on the multi-tower architecture using contextual matching, and richer knowledge learned from the one-tower architecture (fine-grained) is distilled into the multi-tower architecture (coarse-grained) to enhance the performance of the retriever. To evaluate the performance of the proposed model, we construct two new datasets based on the Reddit comments dump and Twitter corpus. Extensive experimental results on the two datasets show that the proposed method achieves substantial improvements on all evaluation metrics compared with traditional baseline methods.
%R 10.18653/v1/2022.acl-long.334
%U https://aclanthology.org/2022.acl-long.334
%U https://doi.org/10.18653/v1/2022.acl-long.334
%P 4865-4877