Maryam Mohammadi


2026

Large language models (LLMs) are increasingly used for communication in many languages; it is therefore important to understand their limitations with respect to culture-specific pragmatics. While LLMs perform well on statistically frequent structures, their shortcomings are most evident in rare pragmatic phenomena. This study investigates whether LLMs can generate a rare, complex honorific mismatch in Farsi. The pattern arises at two levels: (i) a plural pronoun disagrees with a singular referent for the sake of honorification, and (ii) the related components violate the Polite Plural Generalization due to an intimacy implication. This double mismatch pattern is attested in everyday speech, though it is statistically sparse. We tested GPT-4 across multiple scenarios. The results reveal that the model successfully employs the first mismatch to indicate honorification, but fails to adopt the second mismatch, which simultaneously conveys intimacy. The model thus deviates from humanlike behavior at the syntax–pragmatics interface. These findings suggest that, while machine models demonstrate partial success in generating honorifics, they rely primarily on statistical patterns and lack the deeper pragmatic understanding necessary for contextual competence.

2023

This paper investigates the Farsi particle ‘mage’ in interrogatives, including both polar and constituent (wh-) questions. I show that ‘mage’ requires both contextual evidence and a prior belief on the speaker’s part, such that the two contradict each other. While in polar questions (PQs) both types of bias can be straightforwardly expressed through the uttered proposition (cf. Mameni 2010), wh-questions (WhQs) provide no such propositional object. To capture this difference, I propose Answerhood as the relevant notion that supplies the necessary object for ‘mage’ (inspired by Theiler 2021). The proposal establishes the felicity conditions and the meaning of ‘mage’ in relation to (contextually) restricted answerhood in both polar and constituent questions.