2018
ClaimRank: Detecting Check-Worthy Claims in Arabic and English
Israa Jaradat | Pepa Gencheva | Alberto Barrón-Cedeño | Lluís Màrquez | Preslav Nakov
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations
We present ClaimRank, an online system for detecting check-worthy claims. While originally trained on political debates, the system can work for any kind of text, e.g., interviews or just regular news articles. Its aim is to facilitate manual fact-checking efforts by prioritizing the claims that fact-checkers should consider first. ClaimRank supports both Arabic and English. It is trained on actual annotations from nine reputable fact-checking organizations (PolitiFact, FactCheck, ABC, CNN, NPR, NYT, Chicago Tribune, The Guardian, and Washington Post), and thus it can mimic the claim selection strategies of each of them individually, as well as of their union.
2017
A Context-Aware Approach for Detecting Worth-Checking Claims in Political Debates
Pepa Gencheva | Preslav Nakov | Lluís Màrquez | Alberto Barrón-Cedeño | Ivan Koychev
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
In the context of investigative journalism, we address the problem of automatically identifying which claims in a given document are most worthy and should be prioritized for fact-checking. Despite its importance, this is a relatively understudied problem. Thus, we create a new corpus of political debates, containing statements that have been fact-checked by nine reputable sources, and we train machine learning models to predict which claims should be prioritized for fact-checking, i.e., we model the problem as a ranking task. Unlike previous work, which has looked primarily at sentences in isolation, in this paper we focus on a rich input representation modeling the context: relationship between the target statement and the larger context of the debate, interaction between the opponents, and reaction by the moderator and by the public. Our experiments show state-of-the-art results, outperforming a strong rivaling system by a margin, while also confirming the importance of the contextual information.
We Built a Fake News / Click Bait Filter: What Happened Next Will Blow Your Mind!
Georgi Karadzhov | Pepa Gencheva | Preslav Nakov | Ivan Koychev
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017
It is completely amazing! Fake news and “click baits” have totally invaded the cyberspace. Let us face it: everybody hates them for three simple reasons. Reason #2 will absolutely amaze you. What these can achieve at the time of election will completely blow your mind! Now, we all agree, this cannot go on, you know, somebody has to stop it. So, we did this research, and trust us, it is totally great research, it really is! Make no mistake. This is the best research ever! Seriously, come have a look, we have it all: neural networks, attention mechanism, sentiment lexicons, author profiling, you name it. Lexical features, semantic features, we absolutely have it all. And we have totally tested it, trust us! We have results, and numbers, really big numbers. The best numbers ever! Oh, and analysis, absolutely top notch analysis. Interested? Come read the shocking truth about fake news and clickbait in the Bulgarian cyberspace. You won’t believe what we have found!
Proceedings of the Student Research Workshop Associated with RANLP 2017
Venelin Kovatchev | Irina Temnikova | Pepa Gencheva | Yasen Kiprov | Ivelina Nikolova
2016
SUper Team at SemEval-2016 Task 3: Building a Feature-Rich System for Community Question Answering
Tsvetomila Mihaylova | Pepa Gencheva | Martin Boyanov | Ivana Yovcheva | Todor Mihaylov | Momchil Hardalov | Yasen Kiprov | Daniel Balchev | Ivan Koychev | Preslav Nakov | Ivelina Nikolova | Galia Angelova
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)