%0 Conference Proceedings
%T How “open” are the conversations with open-domain chatbots? A proposal for Speech Event based evaluation
%A Doğruöz, A. Seza
%A Skantze, Gabriel
%Y Li, Haizhou
%Y Levow, Gina-Anne
%Y Yu, Zhou
%Y Gupta, Chitralekha
%Y Sisman, Berrak
%Y Cai, Siqi
%Y Vandyke, David
%Y Dethlefs, Nina
%Y Wu, Yan
%Y Li, Junyi Jessy
%S Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
%D 2021
%8 July
%I Association for Computational Linguistics
%C Singapore and Online
%F dogruoz-skantze-2021-open
%X Open-domain chatbots are supposed to converse freely with humans without being restricted to a topic, task or domain. However, the boundaries and/or contents of open-domain conversations are not clear. To clarify the boundaries of “openness”, we conduct two studies: First, we classify the types of “speech events” encountered in a chatbot evaluation data set (i.e., Meena by Google) and find that these conversations mainly cover the “small talk” category and exclude the other speech event categories encountered in real life human-human communication. Second, we conduct a small-scale pilot study to generate online conversations covering a wider range of speech event categories between two humans vs. a human and a state-of-the-art chatbot (i.e., Blender by Facebook). A human evaluation of these generated conversations indicates a preference for human-human conversations, since the human-chatbot conversations lack coherence in most speech event categories. Based on these results, we suggest (a) using the term “small talk” instead of “open-domain” for the current chatbots which are not that “open” in terms of conversational abilities yet, and (b) revising the evaluation methods to test the chatbot conversations against other speech events.
%R 10.18653/v1/2021.sigdial-1.41
%U https://aclanthology.org/2021.sigdial-1.41
%U https://doi.org/10.18653/v1/2021.sigdial-1.41
%P 392-402