Panagiotis Papadakos


2025

Evaluating LLMs on Deceptive Text across Cultures
Katerina Papantoniou | Panagiotis Papadakos | Dimitris Plexousakis
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Deception is a pervasive feature of human communication, yet identifying linguistic cues of deception remains a challenging task due to strong context dependency across domains, cultures, and types of deception. While prior work has relied on human analysis across disciplines like social psychology, philosophy, and political science, large language models (LLMs) offer a new avenue for exploring deception due to their strong performance in Natural Language Processing (NLP) tasks. In this study, we investigate whether open-weight LLMs possess and can apply knowledge about linguistic markers of deception across multiple languages, domains, and cultural contexts, with language and country of origin used as a proxy for culture. We focus on two domains, opinionated reviews and personal descriptions about sensitive topics, spanning five languages and six cultural settings. Using various configurations (zero-shot, one-shot, and fine-tuning), we evaluate the performance of LLMs in detecting and generating deceptive text. In detection tasks, our results reveal cross-model and cross-context performance differences. In generation tasks, linguistic analyses show partial alignment with known deception cues in human text, though this knowledge appears largely uniform and context-agnostic.
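To make the detection configurations concrete, below is a minimal sketch of how a zero-shot and a one-shot deception-labelling query to an open-weight LLM could look. The model name, prompt wording, and review domain are illustrative assumptions, not the authors' actual setup.

```python
# Illustrative sketch only: the paper does not publish its exact prompts or models.
# The model choice and prompt wording below are assumptions.
from transformers import pipeline

# Hypothetical open-weight instruction-tuned model.
generator = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")

def zero_shot_prompt(review: str, language: str) -> str:
    """Zero-shot configuration: ask the model to label a review as deceptive or truthful."""
    return (
        f"The following review is written in {language}.\n"
        f"Review: {review}\n"
        "Is this review deceptive or truthful? Answer with one word."
    )

def one_shot_prompt(review: str, language: str, example: tuple[str, str]) -> str:
    """One-shot configuration: prepend a single labelled example before the query."""
    example_text, example_label = example
    return (
        f"Example review: {example_text}\nLabel: {example_label}\n\n"
        + zero_shot_prompt(review, language)
    )

prompt = zero_shot_prompt("The room was spotless and the staff were wonderful!", "English")
print(generator(prompt, max_new_tokens=5)[0]["generated_text"])
```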

2021

Linguistic Cues of Deception in a Multilingual April Fools’ Day Context
Katerina Papantoniou | Panagiotis Papadakos | Giorgos Flouris | Dimitris Plexousakis
Proceedings of the Eighth Italian Conference on Computational Linguistics (CLiC-it 2021)

2018

Spoken Dialogue for Information Navigation
Alexandros Papangelis | Panagiotis Papadakos | Yannis Stylianou | Yannis Tzitzikas
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

Aiming to expand the current research paradigm for training conversational AI agents that can address real-world challenges, we take a step away from traditional slot-filling goal-oriented spoken dialogue systems (SDS) and model the dialogue in a way that allows users to be more expressive in describing their needs. The goal is to help users make informed decisions rather than being fed matching items. To this end, we describe the Linked-Data SDS (LD-SDS), a system that exploits semantic knowledge bases connected to linked data and supports complex constraints and preferences. We describe the required changes in language understanding and state tracking, and the need for mined features, and we report the promising results (in terms of semantic errors, effort, etc.) of a preliminary evaluation after training two statistical dialogue managers under various conditions.
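As a rough illustration of what "complex constraints and preferences" over linked-data attributes might look like in a dialogue state, here is a minimal sketch; the class names and update rule are assumptions for exposition, not the actual LD-SDS state tracker.

```python
# Sketch of a dialogue state carrying hard constraints and soft preferences
# over linked-data attributes. This is an illustrative assumption, not the
# LD-SDS implementation described in the paper.
from dataclasses import dataclass, field

@dataclass
class Constraint:
    attribute: str      # e.g. an RDF property such as "hasPrice"
    operator: str       # e.g. "<", "=", "near"
    value: object
    hard: bool = True   # hard constraint (must hold) vs. soft preference

@dataclass
class DialogueState:
    constraints: list[Constraint] = field(default_factory=list)

    def update(self, new: Constraint) -> None:
        # A later user turn overrides earlier constraints on the same attribute.
        self.constraints = [c for c in self.constraints if c.attribute != new.attribute]
        self.constraints.append(new)

state = DialogueState()
state.update(Constraint("hasPrice", "<", 100))
state.update(Constraint("nearLocation", "near", "old town", hard=False))  # soft preference
```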