Mirror Minds : An Empirical Study on Detecting LLM-Generated Text via LLMs

Josh Baradia, Shubham Gupta, Suman Kundu


Abstract
The use of large language models (LLMs) in text generation is now unavoidable. LLMs are increasingly replacing search engines and have become the de facto choice for conversation, knowledge extraction, and brainstorming. This study focuses on one question: ‘Can we utilize the generative capabilities of LLMs to detect AI-generated content?’ We present a methodology and empirical results on four publicly available data sets. The results show that a zero-shot detector utilizing multiple LLMs can detect AI-generated content with 90% accuracy.
Anthology ID:
2025.genaidetect-1.3
Volume:
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Firoj Alam, Preslav Nakov, Nizar Habash, Iryna Gurevych, Shammur Chowdhury, Artem Shelmanov, Yuxia Wang, Ekaterina Artemova, Mucahid Kutlu, George Mikros
Venues:
GenAIDetect | WS
Publisher:
International Conference on Computational Linguistics
Pages:
59–67
URL:
https://aclanthology.org/2025.genaidetect-1.3/
Cite (ACL):
Josh Baradia, Shubham Gupta, and Suman Kundu. 2025. Mirror Minds : An Empirical Study on Detecting LLM-Generated Text via LLMs. In Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect), pages 59–67, Abu Dhabi, UAE. International Conference on Computational Linguistics.
Cite (Informal):
Mirror Minds : An Empirical Study on Detecting LLM-Generated Text via LLMs (Baradia et al., GenAIDetect 2025)
PDF:
https://aclanthology.org/2025.genaidetect-1.3.pdf