Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News

Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, Rada Mihalcea


Abstract
In this paper, we explore the construction of natural language explanations for news claims, with the goal of assisting fact-checking and news evaluation applications. We experiment with two methods: (1) an extractive method based on Biased TextRank – a resource-effective unsupervised graph-based algorithm for content extraction; and (2) an abstractive method based on the GPT-2 language model. We perform comparative evaluations on two misinformation datasets in the political and health news domains, and find that the extractive method shows the most promise.
Anthology ID:
2021.nlp4if-1.7
Volume:
Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda
Month:
June
Year:
2021
Address:
Online
Editors:
Anna Feldman, Giovanni Da San Martino, Chris Leberknight, Preslav Nakov
Venue:
NLP4IF
Publisher:
Association for Computational Linguistics
Pages:
45–50
URL:
https://aclanthology.org/2021.nlp4if-1.7
DOI:
10.18653/v1/2021.nlp4if-1.7
Cite (ACL):
Ashkan Kazemi, Zehua Li, Verónica Pérez-Rosas, and Rada Mihalcea. 2021. Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News. In Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 45–50, Online. Association for Computational Linguistics.
Cite (Informal):
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News (Kazemi et al., NLP4IF 2021)
PDF:
https://aclanthology.org/2021.nlp4if-1.7.pdf