Improving Cross-domain, Cross-lingual and Multi-modal Deception Detection

Subhadarshi Panda, Sarah Ita Levitan


Abstract
With the increase of deception and misinformation, especially on social media, it has become crucial to develop machine learning methods that automatically identify deceptive language. In this proposal, we identify key challenges underlying deception detection in cross-domain, cross-lingual, and multi-modal settings. To improve cross-domain deception classification, we propose to use inter-domain distance to identify a suitable source domain for a given target domain. We propose to study the efficacy of multilingual classification models versus translation for cross-lingual deception classification. Finally, we propose to better understand multi-modal deception detection and explore methods to weight and combine information from multiple modalities to improve multi-modal deception classification.
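The idea of selecting a source domain by inter-domain distance can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' method: it treats the distance between two deception datasets as one minus the cosine similarity of their mean sentence embeddings, using a sentence-transformers model chosen only for illustration; the helpers rank_source_domains and domain_centroid are hypothetical names.

import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency, not from the paper

def domain_centroid(texts, model):
    """Average sentence embedding over all texts in a domain."""
    embeddings = model.encode(texts, convert_to_numpy=True)
    return embeddings.mean(axis=0)

def rank_source_domains(target_texts, candidate_domains, model_name="all-MiniLM-L6-v2"):
    """Rank candidate source domains by embedding-centroid distance to the target.

    candidate_domains: dict mapping domain name -> list of texts.
    Returns (domain, distance) pairs sorted from closest to farthest.
    """
    model = SentenceTransformer(model_name)
    target_vec = domain_centroid(target_texts, model)
    distances = {}
    for name, texts in candidate_domains.items():
        source_vec = domain_centroid(texts, model)
        cosine = np.dot(target_vec, source_vec) / (
            np.linalg.norm(target_vec) * np.linalg.norm(source_vec)
        )
        distances[name] = 1.0 - cosine  # smaller distance -> presumably more suitable source
    return sorted(distances.items(), key=lambda kv: kv[1])

Under this sketch's assumption, the closest candidate domain would be chosen as the source for cross-domain training; the paper's actual distance measure and selection procedure may differ.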
Anthology ID:
2022.acl-srw.30
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Samuel Louvan, Andrea Madotto, Brielen Madureira
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
383–390
URL:
https://aclanthology.org/2022.acl-srw.30
DOI:
10.18653/v1/2022.acl-srw.30
Cite (ACL):
Subhadarshi Panda and Sarah Ita Levitan. 2022. Improving Cross-domain, Cross-lingual and Multi-modal Deception Detection. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 383–390, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Improving Cross-domain, Cross-lingual and Multi-modal Deception Detection (Panda & Levitan, ACL 2022)
PDF:
https://aclanthology.org/2022.acl-srw.30.pdf
Data
LIAR