2024
Socratic Human Feedback (SoHF): Expert Steering Strategies for LLM Code Generation
Subramanian Chidambaram | Li Li | Min Bai | Xiaopeng Li | Kaixiang Lin | Xiong Zhou | Alex Williams
Findings of the Association for Computational Linguistics: EMNLP 2024
Large Language Models (LLMs) are increasingly used for generating code solutions, empowered by features like self-debugging and self-reflection. However, LLMs often struggle with complex programming problems without human guidance. This paper investigates the strategies employed by expert programmers to steer code-generating LLMs toward successful outcomes. Through a study involving experts using natural language to guide GPT-4, Gemini Ultra, and Claude 3.5 Sonnet on highly difficult programming challenges, we frame our analysis using the “Socratic Feedback” paradigm for understanding effective steering strategies. By analyzing 30 conversational transcripts across all three models, we map observed feedback strategies to five stages of Socratic Questioning: Definition, Elenchus, Maieutic, Dialectic, and Counterfactual reasoning. We find evidence that by employing a combination of different Socratic feedback strategies across multiple turns, programmers successfully guided the models to solve 74% of the problems that the models initially failed to solve on their own.