%0 Conference Proceedings
%T SemEval-2019 Task 7: RumourEval, Determining Rumour Veracity and Support for Rumours
%A Gorrell, Genevieve
%A Kochkina, Elena
%A Liakata, Maria
%A Aker, Ahmet
%A Zubiaga, Arkaitz
%A Bontcheva, Kalina
%A Derczynski, Leon
%Y May, Jonathan
%Y Shutova, Ekaterina
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%Y Apidianaki, Marianna
%Y Mohammad, Saif M.
%S Proceedings of the 13th International Workshop on Semantic Evaluation
%D 2019
%8 June
%I Association for Computational Linguistics
%C Minneapolis, Minnesota, USA
%F gorrell-etal-2019-semeval
%X Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of “fake news” has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates to reach a verdict on a rumour’s veracity. As in RumourEval 2017, we provided a dataset of dubious posts and ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with results achieved by participants. We received 22 system submissions (a 70% increase from RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.
%R 10.18653/v1/S19-2147
%U https://aclanthology.org/S19-2147
%U https://doi.org/10.18653/v1/S19-2147
%P 845-854