BERTective: Language Models and Contextual Information for Deception Detection

Tommaso Fornaciari, Federico Bianchi, Massimo Poesio, Dirk Hovy


Abstract
Spotting a lie is challenging but has an enormous potential impact on security as well as private and public safety. Several NLP methods have been proposed to classify texts as truthful or deceptive. In most cases, however, the target texts’ preceding context is not considered. This is a severe limitation, as any communication takes place in context, not in a vacuum, and context can help to detect deception. We study a corpus of Italian dialogues containing deceptive statements and implement deep neural models that incorporate various linguistic contexts. We establish a new state of the art in identifying deception and find that not all context is equally useful to the task. Only the texts closest to the target, if from the same speaker (rather than questions by an interlocutor), boost performance. We also find that the semantic information in language models such as BERT contributes to performance. However, BERT alone does not capture the implicit knowledge of deception cues: its contribution is conditional on the concurrent use of attention to learn cues from BERT’s representations.
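The abstract's central technical claim is that BERT helps only when paired with an attention mechanism that learns deception cues from its token representations. The following is a minimal, hypothetical sketch of such a setup (attention pooling over BERT outputs feeding a binary classifier), not the authors' actual architecture; the Italian BERT checkpoint name, layer sizes, and example sentence are assumptions for illustration.

    # Hypothetical sketch: attention pooling over BERT token states for
    # truthful-vs-deceptive classification. Not the paper's exact model.
    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer, AutoModel

    class AttentionPooling(nn.Module):
        """Learn a weighted average of token vectors instead of using [CLS] alone."""
        def __init__(self, hidden_size):
            super().__init__()
            self.scorer = nn.Linear(hidden_size, 1)

        def forward(self, token_states, attention_mask):
            scores = self.scorer(token_states).squeeze(-1)          # (batch, seq)
            scores = scores.masked_fill(attention_mask == 0, -1e9)  # ignore padding
            weights = torch.softmax(scores, dim=-1).unsqueeze(-1)   # (batch, seq, 1)
            return (weights * token_states).sum(dim=1)              # (batch, hidden)

    class DeceptionClassifier(nn.Module):
        # Checkpoint name is an assumption (any Italian BERT would do here).
        def __init__(self, model_name="dbmdz/bert-base-italian-uncased"):
            super().__init__()
            self.bert = AutoModel.from_pretrained(model_name)
            self.pool = AttentionPooling(self.bert.config.hidden_size)
            self.head = nn.Linear(self.bert.config.hidden_size, 2)  # truthful vs. deceptive

        def forward(self, input_ids, attention_mask):
            states = self.bert(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
            return self.head(self.pool(states, attention_mask))

    tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-uncased")
    model = DeceptionClassifier()
    batch = tokenizer(["Non ero presente quella sera."], return_tensors="pt",
                      padding=True, truncation=True)
    logits = model(batch["input_ids"], batch["attention_mask"])

In this sketch, the attention layer rather than BERT itself is what scores which tokens matter for the deception decision, mirroring the abstract's point that BERT's contribution is conditional on an attention mechanism over its representations.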
Anthology ID:
2021.eacl-main.232
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2699–2708
URL:
https://aclanthology.org/2021.eacl-main.232
DOI:
10.18653/v1/2021.eacl-main.232
Cite (ACL):
Tommaso Fornaciari, Federico Bianchi, Massimo Poesio, and Dirk Hovy. 2021. BERTective: Language Models and Contextual Information for Deception Detection. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2699–2708, Online. Association for Computational Linguistics.
Cite (Informal):
BERTective: Language Models and Contextual Information for Deception Detection (Fornaciari et al., EACL 2021)
PDF:
https://aclanthology.org/2021.eacl-main.232.pdf
Dataset:
 2021.eacl-main.232.Dataset.zip
Software:
 2021.eacl-main.232.Software.zip