Exploring Deceptive Domain Transfer Strategies: Mitigating the Differences among Deceptive Domains

Sadat Shahriar, Arjun Mukherjee, Omprakash Gnawali


Abstract
Deceptive text poses a significant threat to users, resulting in widespread misinformation and disorder. While researchers have created numerous cutting-edge techniques for detecting deception in domain-specific settings, whether there is a generic deception pattern, such that deception-related knowledge in one domain can be transferred to another, remains mostly unexplored. Moreover, the disparities in textual expression across these many mediums pose an additional obstacle to generalization. To this end, we present a Multi-Task Learning (MTL)-based deception generalization strategy to reduce domain-specific noise and facilitate a better understanding of deception via generalized training. As deceptive domains, we use News (fake news), Tweets (rumors), and Reviews (fake reviews), and employ LSTM and BERT models to incorporate domain transfer techniques. Our proposed architecture, combining domain-independent and domain-specific training, improves deception detection performance by up to 5.28% in F1-score.
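The MTL strategy described in the abstract, a shared domain-independent representation combined with domain-specific components for News, Tweets, and Reviews, can be sketched roughly as follows. This is a minimal illustrative sketch only: the toy linear encoder, per-domain heads, dimensions, and synthetic data are assumptions, not the authors' implementation (which builds on LSTM and BERT encoders).

```python
import numpy as np

# Minimal MTL sketch (illustrative assumptions, not the paper's code):
# a shared encoder learns domain-independent deception features, while
# per-domain heads capture domain-specific cues. Gradients from every
# domain flow into the shared weights, which is the "generalized training".

rng = np.random.default_rng(0)

D_IN, D_SHARED = 16, 8
DOMAINS = ["news", "tweets", "reviews"]

W_shared = rng.normal(scale=0.1, size=(D_IN, D_SHARED))           # shared encoder
heads = {d: rng.normal(scale=0.1, size=D_SHARED) for d in DOMAINS}  # domain heads


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def forward(x, domain):
    h = np.tanh(x @ W_shared)            # domain-independent representation
    return sigmoid(h @ heads[domain]), h


def mtl_step(x, y, domain, lr=0.1):
    """One SGD step: the gradient updates both the domain-specific head
    and the shared encoder (binary cross-entropy loss)."""
    global W_shared
    p, h = forward(x, domain)
    err = p - y                          # dBCE/dlogit
    heads[domain] -= lr * err * h
    g_h = err * heads[domain] * (1 - h ** 2)  # backprop through tanh
    W_shared -= lr * np.outer(x, g_h)
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))


# Toy mixed-domain stream: deceptive examples share a common direction,
# standing in for a cross-domain deception pattern.
signal = rng.normal(size=D_IN)
losses = []
for step in range(300):
    domain = DOMAINS[step % 3]
    y = step % 2                         # alternate deceptive / truthful
    x = (signal if y else -signal) + rng.normal(scale=0.5, size=D_IN)
    losses.append(float(mtl_step(x, y, domain)))

print(round(np.mean(losses[:30]), 3), round(np.mean(losses[-30:]), 3))
```

Because all three domains update `W_shared`, knowledge about deception learned from one domain benefits the others, while each head remains free to model domain-specific noise.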
Anthology ID:
2023.ranlp-1.115
Volume:
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month:
September
Year:
2023
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd., Shoumen, Bulgaria
Pages:
1076–1084
URL:
https://aclanthology.org/2023.ranlp-1.115
Cite (ACL):
Sadat Shahriar, Arjun Mukherjee, and Omprakash Gnawali. 2023. Exploring Deceptive Domain Transfer Strategies: Mitigating the Differences among Deceptive Domains. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 1076–1084, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal):
Exploring Deceptive Domain Transfer Strategies: Mitigating the Differences among Deceptive Domains (Shahriar et al., RANLP 2023)
PDF:
https://aclanthology.org/2023.ranlp-1.115.pdf