Assessing the Verifiability of Attributions in News Text

Edward Newell, Ariane Schang, Drew Margolin, Derek Ruths


Abstract
When reporting the news, journalists rely on the statements of stakeholders, experts, and officials. The attribution of such a statement is verifiable if its fidelity to the source can be confirmed or denied. In this paper, we develop a new NLP task: determining the verifiability of an attribution based on linguistic cues. We operationalize the notion of verifiability as a score between 0 and 1 using human judgments in a comparison-based approach. Using crowdsourcing, we create a dataset of verifiability-scored attributions, and demonstrate a model that achieves an RMSE of 0.057 and Spearman’s rank correlation of 0.95 to human-generated scores. We discuss the application of this technique to the analysis of mass media.
Anthology ID: I17-1076
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 754–763
URL: https://aclanthology.org/I17-1076
Cite (ACL): Edward Newell, Ariane Schang, Drew Margolin, and Derek Ruths. 2017. Assessing the Verifiability of Attributions in News Text. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 754–763, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Assessing the Verifiability of Attributions in News Text (Newell et al., IJCNLP 2017)
PDF: https://aclanthology.org/I17-1076.pdf
Dataset: I17-1076.Datasets.zip