Conrad Borchers


2025

Beyond Agreement: Rethinking Ground Truth in Educational AI Annotation
Danielle R Thomas | Conrad Borchers | Ken Koedinger
Proceedings of the Artificial Intelligence in Measurement and Education Conference (AIME-Con): Full Papers

Humans are biased and inconsistent, and yet we keep trusting them to define “ground truth.” This paper questions the overreliance on inter-rater reliability in educational AI and proposes a multidimensional approach that pairs expert-based methods with close-the-loop validity to build annotations that reflect impact, not just agreement. It’s time we do better.
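As a point of reference for the agreement metrics this paper argues against over-relying on, here is a minimal sketch of how inter-rater reliability is typically quantified, assuming scikit-learn and hypothetical annotation labels (not the paper's data or rubric):

```python
# Minimal sketch: how inter-rater agreement is commonly quantified.
# The labels below are hypothetical; the paper's annotation task differs.
from sklearn.metrics import cohen_kappa_score

# Two annotators labeling the same ten tutoring responses (1 = "effective praise")
rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Raw percent agreement ignores agreement expected by chance
percent_agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Cohen's kappa corrects for chance agreement between the two raters
kappa = cohen_kappa_score(rater_a, rater_b)

print(f"Percent agreement: {percent_agreement:.2f}")
print(f"Cohen's kappa:     {kappa:.2f}")
```

High values on either metric say only that annotators agree with each other, which is precisely the gap the paper highlights: agreement alone does not show that the annotations capture impact on learners.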

2022

Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements
Conrad Borchers | Dalia Gala | Benjamin Gilburt | Eduard Oravkin | Wilfried Bounsi | Yuki M Asano | Hannah Kirk
Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)

The growing capability and availability of generative language models have enabled a wide range of new downstream tasks. Academic research has identified, quantified, and mitigated biases present in language models but is rarely tailored to downstream tasks where the wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts yields no significant improvement in either bias or realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.
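For illustration, a minimal sketch of the zero-shot versus diversity-encouraging prompting comparison, assuming the openai Python client; the model name and prompt wording are stand-ins, not the paper's exact setup (which used GPT-3 via the earlier Completions API):

```python
# Minimal sketch: zero-shot vs. diversity-encouraging prompting for job-ad generation.
# Model name and prompt wording are illustrative assumptions, not the paper's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ZERO_SHOT = "Write a job advertisement for a carpenter."
DIVERSITY_PROMPT = (
    "Write a job advertisement for a carpenter. "
    "Use inclusive, gender-neutral language and avoid stereotyped requirements."
)

def generate_ad(prompt: str) -> str:
    """Generate one advertisement for a given prompt."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in model; the paper studied GPT-3
        messages=[{"role": "user", "content": prompt}],
        max_tokens=300,
    )
    return response.choices[0].message.content

for name, prompt in [("zero-shot", ZERO_SHOT), ("diversity-encouraging", DIVERSITY_PROMPT)]:
    print(f"--- {name} ---")
    print(generate_ad(prompt))
```

Generated ads from each condition would then be scored for bias and realism and compared against real advertisements, which is the evaluation the abstract describes.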