%0 Conference Proceedings
%T When can I Speak? Predicting initiation points for spoken dialogue agents
%A Li, Siyan
%A Paranjape, Ashwin
%A Manning, Christopher
%Y Lemon, Oliver
%Y Hakkani-Tur, Dilek
%Y Li, Junyi Jessy
%Y Ashrafzadeh, Arash
%Y Garcia, Daniel Hernández
%Y Alikhani, Malihe
%Y Vandyke, David
%Y Dušek, Ondřej
%S Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
%D 2022
%8 September
%I Association for Computational Linguistics
%C Edinburgh, UK
%F li-etal-2022-speak
%X Current spoken dialogue systems initiate their turns after a long period of silence (700-1000ms), which leads to little real-time feedback, sluggish responses, and an overall stilted conversational flow. Humans typically respond within 200ms and successfully predicting initiation points in advance would allow spoken dialogue agents to do the same. In this work, we predict the lead-time to initiation using prosodic features from a pre-trained speech representation model (wav2vec 1.0) operating on user audio and word features from a pre-trained language model (GPT-2) operating on incremental transcriptions. To evaluate errors, we propose two metrics w.r.t. predicted and true lead times. We train and evaluate the models on the Switchboard Corpus and find that our method outperforms features from prior work on both metrics and vastly outperforms the common approach of waiting for 700ms of silence.
%R 10.18653/v1/2022.sigdial-1.22
%U https://aclanthology.org/2022.sigdial-1.22
%U https://doi.org/10.18653/v1/2022.sigdial-1.22
%P 217-224