SubmissionNumber#=%=#42
FinalPaperTitle#=%=#Evaluating Humanities Theory Alignment in Large Language Models: Incremental Prompting and Statistical Assessment
ShortPaperTitle#=%=#
NumberOfPages#=%=#15
CopyrightSigned#=%=#Janis Pagel
JobTitle#==#
Organization#==#
Abstract#==#We propose a method to evaluate the extent to which an LLM's observable input–output behavior aligns with established theories in the humanities and cultural studies. We instantiate the framework on three humanities theories—Davidson's truth-conditional semantics, Lewis's truth in fiction, and Iser's concept of textual gaps—using a top-down, theory-driven black-box framework. Core assumptions of these theories are reconstructed into testable behavioral rules and assessed via controlled classification tasks with systematic prompt comparisons and significance testing. Our experiments show that theory-uninformed classification prompts generally outperform theory-enriched prompts in Lewis and Iser settings, while theory-informed prompts help in the Davidson task. Gemini Flash consistently achieves the highest scores across tasks and corpora, while the Iser gap detection task remains substantially harder than binary truth-conditional judgments. Statistical tests confirm robust prompt effects and the failure of basic prompts. However, model behavior under incremental theory exposure is unstable and architecture-dependent.
Author{1}{Firstname}#=%=#Axel
Author{1}{Lastname}#=%=#Pichler
Author{1}{Username}#=%=#axpic
Author{1}{Orcid}#=%=#https://orcid.org/0000-0002-9177-7645
Author{1}{Email}#=%=#axel.pichler@univie.ac.at
Author{1}{Affiliation}#=%=#University of Vienna
Author{2}{Firstname}#=%=#Janis
Author{2}{Lastname}#=%=#Pagel
Author{2}{Username}#=%=#janispagel
Author{2}{Orcid}#=%=#https://orcid.org/0000-0003-4370-1483
Author{2}{Email}#=%=#janis.pagel@uni-koeln.de
Author{2}{Affiliation}#=%=#Department of Digital Humanities, University of Cologne
==========