%0 Conference Proceedings
%T Evaluation of HMM-based Models for the Annotation of Unsegmented Dialogue Turns
%A Martínez-Hinarejos, Carlos-D.
%A Tamarit, Vicent
%A Benedí, José-M.
%Y Calzolari, Nicoletta
%Y Choukri, Khalid
%Y Maegaard, Bente
%Y Mariani, Joseph
%Y Odijk, Jan
%Y Piperidis, Stelios
%Y Rosner, Mike
%Y Tapias, Daniel
%S Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10)
%D 2010
%8 May
%I European Language Resources Association (ELRA)
%C Valletta, Malta
%F martinez-hinarejos-etal-2010-evaluation
%X Corpus-based dialogue systems rely on statistical models whose parameters are inferred from annotated dialogues. The dialogues are usually annotated in terms of Dialogue Acts (DA), and the manual annotation is difficult (as annotation rules are hard to define), error-prone and time-consuming. Therefore, several semi-automatic annotation processes have been proposed to speed up annotation and consequently obtain a dialogue system in less total time. These processes are usually based on statistical models. The standard statistical annotation model is based on Hidden Markov Models (HMM). In this work, we explore the impact of different types of HMM, with different numbers of states, on annotation accuracy. We performed experiments using these models on two dialogue corpora (Dihana and SwitchBoard) with dissimilar features. The results show that some types of models improve on the standard HMM in a human-computer task-oriented dialogue corpus (Dihana corpus), but their impact is lower in a human-human non-task-oriented dialogue corpus (SwitchBoard corpus).
%U http://www.lrec-conf.org/proceedings/lrec2010/pdf/303_Paper.pdf