Muyang Zhou


2026

As Large Language Models (LLMs) increasingly power chatbots, social media, and other interactive platforms, the ability to detect AI-generated text in conversational settings is critical for ensuring transparency and preventing misuse. However, existing detection methods focus on static, document-level content, overlooking the dynamic nature of dialogue. To address this gap, we propose an utterance-level detection framework that integrates features from individual and combined analyses of dialogue participants' responses to detect LLM-generated text in conversational settings. Leveraging a transformer-based recurrent architecture and a curated dataset of human-human, human-LLM, and LLM-LLM dialogues, the framework achieves an accuracy of 98.14% with high inference speed, supported by extensive experiments across different models and settings. This work provides an effective solution for detecting LLM-generated text in real-time conversations, promoting transparency and mitigating the risks of misuse.