Mucahid Kutlu


2025

Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)
Firoj Alam | Preslav Nakov | Nizar Habash | Iryna Gurevych | Shammur Chowdhury | Artem Shelmanov | Yuxia Wang | Ekaterina Artemova | Mucahid Kutlu | George Mikros
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)

I Know You Did Not Write That! A Sampling Based Watermarking Method for Identifying Machine Generated Text
Kaan Efe Keleş | Ömer Kaan Gürbüz | Mucahid Kutlu
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)

Potential harms of Large Language Models, such as mass misinformation and plagiarism, can be partially mitigated if there exists a reliable way to detect machine-generated text. In this paper, we propose a new watermarking method to detect machine-generated text. Our method embeds a unique pattern within the generated text, ensuring that while the content remains coherent and natural to human readers, it carries distinct markers that can be identified algorithmically. Specifically, we intervene in the token sampling process in a way that enables us to trace back our token choices during the detection phase. We show how watermarking affects textual quality and compare our proposed method with a state-of-the-art watermarking method in terms of robustness and detectability. Through extensive experiments, we demonstrate the effectiveness of our watermarking scheme in distinguishing between watermarked and non-watermarked text, achieving high detection rates while maintaining textual quality.
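
The sketch below illustrates, in very simplified form, the general idea the abstract describes: steering token sampling with a secret key so that the token choices can be re-derived and tested at detection time. It uses the well-known exponential-minimum (Gumbel-style) sampling trick with a toy vocabulary and a toy model; it is an assumption-laden illustration, not the paper's actual algorithm.

# Minimal, self-contained sketch of key-steered token sampling for watermark
# detection. The toy vocabulary, toy model, and the exponential-minimum trick
# are illustrative assumptions, not the method proposed in the paper.
import hashlib
import math
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
SECRET_KEY = "shared-secret"  # hypothetical key known to generator and detector

def keyed_randoms(context: str) -> list[float]:
    """One pseudo-random number per vocabulary item, derived from (key, context)."""
    seed = hashlib.sha256(f"{SECRET_KEY}|{context}".encode()).hexdigest()
    rng = random.Random(seed)
    return [rng.random() for _ in VOCAB]

def toy_model(context: str) -> list[float]:
    """Stand-in for a language model: an arbitrary next-token distribution."""
    rng = random.Random("lm-" + context)
    weights = [rng.random() + 0.1 for _ in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt: str, length: int) -> list[str]:
    """Pick the token maximizing r_i ** (1 / p_i): this respects the model's
    distribution in aggregate, but is deterministic given the secret key."""
    tokens, context = [], prompt
    for _ in range(length):
        probs, rs = toy_model(context), keyed_randoms(context)
        scores = [r ** (1.0 / max(p, 1e-9)) for r, p in zip(rs, probs)]
        context = VOCAB[scores.index(max(scores))]
        tokens.append(context)
    return tokens

def detection_score(prompt: str, tokens: list[str]) -> float:
    """Re-derive the keyed randomness; watermarked text has unusually large
    r-values for its chosen tokens, so the mean -log(1 - r) exceeds 1."""
    score, context = 0.0, prompt
    for tok in tokens:
        r = keyed_randoms(context)[VOCAB.index(tok)]
        score += -math.log(1.0 - r + 1e-9)
        context = tok
    return score / len(tokens)

if __name__ == "__main__":
    watermarked = generate("the", 80)
    unwatermarked = [random.choice(VOCAB) for _ in range(80)]
    print("watermarked score:", round(detection_score("the", watermarked), 2))
    print("plain-text score:", round(detection_score("the", unwatermarked), 2))

Only the secret key is needed at detection time: the detector re-derives the same per-token randomness and flags text whose chosen tokens score improbably high, which is the sense in which token choices can be "traced back" during the detection phase.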

TurQUaz at GenAI Detection Task 1: Dr. Perplexity or: How I Learned to Stop Worrying and Love the Finetuning
Kaan Efe Keleş | Mucahid Kutlu
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)

This paper details our methods for addressing Task 1 of the GenAI Content Detection shared tasks, which focuses on distinguishing AI-generated text from human-written content. The task comprises two subtasks: Subtask A, centered on English-only data, and Subtask B, which extends the challenge to multilingual data. Our approach uses a fine-tuned XLM-RoBERTa model for classification, complemented by features including perplexity and TF-IDF. While perplexity is commonly regarded as a useful indicator for identifying machine-generated text, our findings suggest its limitations in multi-model and multilingual contexts. Our approach ranked 6th in Subtask A; a submission issue left our Subtask B entry unranked, though it would have placed 23rd.
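
As a rough illustration of the kind of pipeline the abstract describes, the sketch below combines a perplexity score from a causal language model with TF-IDF n-gram features and feeds both to a simple linear classifier. The scoring model (gpt2), the logistic-regression classifier standing in for the fine-tuned XLM-RoBERTa, and the two toy examples are assumptions made for the sketch, not the system submitted to the shared task.

# Rough sketch: perplexity + TF-IDF features for human vs. machine text
# classification. gpt2 as the scoring LM and logistic regression as the
# classifier are placeholders, not the configuration used in the paper.
import math

import torch
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

lm_tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    """exp of the mean token-level cross-entropy under the scoring LM."""
    ids = lm_tok(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    return math.exp(lm(ids, labels=ids).loss.item())

texts = ["I drafted this essay during class.",          # toy human example
         "As an AI language model, I will outline..."]  # toy machine example
labels = [0, 1]  # 0 = human-written, 1 = AI-generated

tfidf = TfidfVectorizer(ngram_range=(1, 2))
features = hstack([tfidf.fit_transform(texts),
                   csr_matrix([[perplexity(t)] for t in texts])])

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))

A fine-tuned XLM-RoBERTa, as used by the authors, would consume the text directly; the linear model here only shows how hand-crafted signals such as perplexity and TF-IDF can be concatenated into one feature matrix.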

GenAI Content Detection Task 2: AI vs. Human – Academic Essay Authenticity Challenge
Shammur Absar Chowdhury | Hind Almerekhi | Mucahid Kutlu | Kaan Efe Keleş | Fatema Ahmad | Tasnim Mohiuddin | George Mikros | Firoj Alam
Proceedings of the 1st Workshop on GenAI Content Detection (GenAIDetect)

This paper presents a comprehensive overview of the first edition of the Academic Essay Authenticity Challenge, organized as part of the GenAI Content Detection shared tasks co-located with COLING 2025. This challenge focuses on detecting machine-generated vs. human-authored essays for academic purposes. The task is defined as follows: “Given an essay, identify whether it is generated by a machine or authored by a human.” The challenge involves two languages: English and Arabic. During the evaluation phase, 25 teams submitted systems for English and 21 teams for Arabic, reflecting substantial interest in the task. Finally, five teams submitted system description papers. The majority of submissions utilized fine-tuned transformer-based models, with one team employing Large Language Models (LLMs) such as Llama 2 and Llama 3. This paper outlines the task formulation, details the dataset construction process, and explains the evaluation framework. Additionally, we present a summary of the approaches adopted by participating teams. Nearly all submitted systems outperformed the n-gram-based baseline, with the top-performing systems achieving F1 scores exceeding 0.98 for both languages, indicating significant progress in the detection of machine-generated text.
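
For reference, the sketch below shows what an n-gram baseline of the kind mentioned in the overview might look like: character n-gram TF-IDF features with a linear classifier, scored by F1. The feature ranges, the choice of a linear SVM, and the toy data are assumptions for illustration, not the organizers' official baseline.

# Illustrative n-gram baseline (not the official one): character n-gram TF-IDF
# features, a linear SVM, and F1 as the evaluation metric.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["I wrote this essay for my history class.",        # toy human essay
               "Certainly! Here is a five-paragraph essay on..."]  # toy machine essay
train_labels = [0, 1]  # 0 = human-authored, 1 = machine-generated

baseline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LinearSVC(),
).fit(train_texts, train_labels)

print("F1:", f1_score(train_labels, baseline.predict(train_texts)))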

2020

TrClaim-19: The First Collection for Turkish Check-Worthy Claim Detection with Annotator Rationales
Yavuz Selim Kartal | Mucahid Kutlu
Proceedings of the 24th Conference on Computational Natural Language Learning

Massive misinformation spread over the Internet has many negative impacts on our lives. While spreading a claim is easy, investigating its veracity is hard and time-consuming. Therefore, we urgently need systems to help human fact-checkers. However, available data resources to develop effective systems are limited, and the vast majority of them are for English. In this work, we introduce TrClaim-19, the very first labeled dataset for Turkish check-worthy claims. TrClaim-19 consists of 2,287 labeled Turkish tweets with annotator rationales, enabling us to better understand the characteristics of check-worthy claims. The rationales we collected suggest that claims’ topics and their possible negative impacts are the main factors affecting their check-worthiness.

2013

A Hybrid Morphological Disambiguation System for Turkish
Mucahid Kutlu | Ilyas Cicekli
Proceedings of the Sixth International Joint Conference on Natural Language Processing