A Domain-Independent Holistic Approach to Deception Detection

Sadat Shahriar, Arjun Mukherjee, Omprakash Gnawali


Abstract
Deception in text takes different forms across domains, including fake news, rumor tweets, and spam emails. Irrespective of the domain, the main intent of deceptive text is to deceive the reader. Although domain-specific deception detection exists, domain-independent deception detection can provide a holistic picture, which can be crucial to understanding how deception occurs in text. In this paper, we detect deception in a domain-independent setting using deep learning architectures. Our method outperforms state-of-the-art results on most benchmark datasets, with an overall accuracy of 93.42% and F1-score of 93.22%. The domain-independent training allows us to capture subtler nuances of deceptive writing style. Furthermore, we analyze how much in-domain data may be needed to accurately detect deception, especially in cases where training data may not be readily available. Our results and analysis indicate that there may be a universal pattern of deception underlying text, independent of the domain, which can create a novel area of research and open up new avenues in the field of deception detection.
Anthology ID:
2021.ranlp-1.147
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
Month:
September
Year:
2021
Address:
Held Online
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
1308–1317
URL:
https://aclanthology.org/2021.ranlp-1.147
Cite (ACL):
Sadat Shahriar, Arjun Mukherjee, and Omprakash Gnawali. 2021. A Domain-Independent Holistic Approach to Deception Detection. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1308–1317, Held Online. INCOMA Ltd.
Cite (Informal):
A Domain-Independent Holistic Approach to Deception Detection (Shahriar et al., RANLP 2021)
PDF:
https://aclanthology.org/2021.ranlp-1.147.pdf
Data:
FakeNewsNet, LIAR