Anku Rani


2023

FACTIFY3M: A benchmark for multimodal fact verification with explainability through 5W Question-Answering
Megha Chakraborty | Khushbu Pahwa | Anku Rani | Shreyas Chatterjee | Dwip Dalal | Harshit Dave | Ritvik G | Preethi Gurumurthy | Adarsh Mahor | Samahriti Mukherjee | Aditya Pakala | Ishan Paul | Janvita Reddy | Arghya Sarkar | Kinjal Sensharma | Aman Chadha | Amit Sheth | Amitava Das
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Combating disinformation is one of the most pressing societal crises: about 67% of the American population believes that disinformation produces a lot of uncertainty, and 10% of them knowingly propagate disinformation. Evidence shows that disinformation can manipulate democratic processes and public opinion, causing disruption in the stock market, panic and anxiety in society, and even death during crises. Therefore, disinformation should be identified promptly and, if possible, mitigated. With approximately 3.2 billion images and 720,000 hours of video shared online daily on social media platforms, scalable detection of multimodal disinformation requires efficient fact verification. Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR), the research community has made little headway in multimodal fact verification. To address this gap, we introduce FACTIFY 3M, a dataset of 3 million samples that pushes the boundaries of fact verification via a multimodal fake news dataset, in addition to offering explainability through the concept of 5W question-answering. Salient features of the dataset include: (i) textual claims, (ii) ChatGPT-generated paraphrased claims, (iii) associated images, (iv) stable-diffusion-generated additional images (i.e., visual paraphrases), (v) pixel-level image heatmaps to foster image-text explainability of the claim, (vi) 5W QA pairs, and (vii) adversarial fake news stories.
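A minimal sketch of what one FACTIFY 3M sample could look like when loaded into Python, based on the seven components listed in the abstract. The field names below are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical record layout for one FACTIFY 3M sample.
# Field names are assumptions derived from the abstract's feature list (i)-(vii).
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Factify3MSample:
    claim: str                        # (i) textual claim
    paraphrased_claims: List[str]     # (ii) ChatGPT-generated paraphrases
    image_paths: List[str]            # (iii) associated images
    generated_image_paths: List[str]  # (iv) stable-diffusion visual paraphrases
    heatmap_path: str                 # (v) pixel-level image heatmap
    five_w_qa: Dict[str, str]         # (vi) 5W question-answer pairs
    adversarial_story: str            # (vii) adversarial fake news story
```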

FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering
Anku Rani | S.M Towhidul Islam Tonmoy | Dwip Dalal | Shreya Gautam | Megha Chakraborty | Aman Chadha | Amit Sheth | Amitava Das
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Automatic fact verification has received significant attention recently. Contemporary automatic fact-checking systems focus on estimating truthfulness using numerical scores that are not human-interpretable. A human fact-checker generally follows several logical steps to verify a claim's verisimilitude and conclude whether it is truthful or a mere masquerade. Popular fact-checking websites follow a common structure for fact categorization, such as half true, half false, false, pants on fire, etc. Therefore, it is necessary to have an aspect-based (delineating which part(s) are true and which are false), explainable system that can assist human fact-checkers in asking relevant questions related to a fact, which can then be validated separately to reach a final verdict. In this paper, we propose a 5W framework (who, what, when, where, and why) for question-answer-based fact explainability. To that end, we present a semi-automatically generated dataset called FACTIFY-5WQA, which consists of 391,041 facts along with relevant 5W QAs, the major contribution of this paper. A semantic role labeling system is used to locate the 5Ws, and QA pairs are then generated for claims using a masked language model. We also report a baseline QA system that automatically locates those answers in evidence documents, which can serve as a baseline for future research in the field. Lastly, we propose a robust fact verification system that takes paraphrased claims and automatically validates them. The dataset and the baseline model are available at https://github.com/ankuranii/acl-5W-QA
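A minimal sketch (not the authors' implementation) of the idea described above: semantic role labeling (SRL) output is mapped to 5W slots, and a template question stands in for the masked-language-model question generation. The role-to-W mapping and the example SRL spans are illustrative assumptions.

```python
# Map SRL role labels to 5W slots and form simple questions for each slot.
# Assumed mapping: ARG0 -> who, ARG1 -> what, ARGM-TMP -> when,
# ARGM-LOC -> where, ARGM-CAU -> why.
from typing import Dict, List, Tuple

ROLE_TO_W = {
    "ARG0": "who",        # agent
    "ARG1": "what",       # patient / theme
    "ARGM-TMP": "when",   # temporal modifier
    "ARGM-LOC": "where",  # locative modifier
    "ARGM-CAU": "why",    # cause modifier
}


def five_w_slots(srl_spans: List[Tuple[str, str]]) -> Dict[str, str]:
    """Collect 5W slots from (role, span_text) pairs produced by an SRL system."""
    slots: Dict[str, str] = {}
    for role, text in srl_spans:
        w = ROLE_TO_W.get(role)
        if w and w not in slots:
            slots[w] = text
    return slots


def question_for(w: str, verb: str) -> str:
    """Form a simple template question for one W; the paper instead fills a
    masked-language-model template, which this function only stands in for."""
    templates = {
        "who": f"Who {verb}?",
        "what": f"What was {verb}?",
        "when": "When did it happen?",
        "where": "Where did it happen?",
        "why": "Why did it happen?",
    }
    return templates[w]


if __name__ == "__main__":
    # Hypothetical SRL output for the claim
    # "The mayor opened the new bridge in Austin on Monday."
    spans = [("ARG0", "The mayor"), ("ARG1", "the new bridge"),
             ("ARGM-LOC", "in Austin"), ("ARGM-TMP", "on Monday")]
    for w, answer in five_w_slots(spans).items():
        print(question_for(w, "opened"), "->", answer)
```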

2021

Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
Sebastin Santy | Anku Rani | Monojit Choudhury
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021