Tanushree Ravindra Pratap Yadav


2026

As Large Language Models (LLMs) approach human-level reasoning in English, their performance in low-resource, code-mixed languages remains surprisingly brittle. We identify Competence Collapse, a distinct pathology where models capable of complex reasoning in English exhibit severe utility degradation when prompted in Hinglish (Hindi-English). We quantify this as a Service Gap, observing a statistically significant decline in instructional quality (∆D ≈ −11.3%, p < 0.001) across 9 diverse architectures. Spectral analysis suggests that this stems from a representational divergence between the model's High-Utility Direction and its Generation Subspace. To bridge this gap, we propose Cross-Lingual Activation Steering (CLAS), an inference-time intervention that injects a "Competence Gap Vector" into the residual stream. Evaluated across 6 open-weight models (using a lightweight calibration set, N = 50), CLAS recovered utility by ∆D = +2.22 (d = 0.60) while preserving code-mixed fidelity (CMI ≈ 0.4) and reinforcing safety protocols.
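
The abstract does not spell out how CLAS is implemented; the sketch below illustrates generic residual-stream activation steering in that spirit, assuming a HuggingFace-style decoder-only model whose blocks live at model.model.layers. The model name, injection layer, steering strength, and calibration prompts are illustrative placeholders, not values or code from the paper.

```python
# Minimal sketch of residual-stream activation steering (not the paper's code).
# Assumes a HuggingFace decoder-only model with blocks at model.model.layers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model
LAYER = 16   # hypothetical injection layer
ALPHA = 4.0  # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
model.eval()

def mean_residual(prompts, layer):
    """Average last-token hidden state after block `layer` over a small calibration set."""
    vecs = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so block `layer` is index layer + 1
        vecs.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(vecs).mean(dim=0)

# A "competence gap" direction: English-prompt activations minus Hinglish-prompt
# activations on paired calibration prompts (placeholders; the paper uses N = 50).
english_prompts = ["Explain how to plan a monthly budget."]
hinglish_prompts = ["Monthly budget kaise plan karein, explain karo."]
gap_vector = mean_residual(english_prompts, LAYER) - mean_residual(hinglish_prompts, LAYER)
gap_vector = gap_vector / gap_vector.norm()

def steering_hook(module, inputs, output):
    # Decoder blocks return a tuple; hidden states are the first element.
    hidden = output[0] + ALPHA * gap_vector.to(dtype=output[0].dtype, device=output[0].device)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
try:
    ids = tok("Mujhe files safely backup karne ka tarika batao.", return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=128)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()
```

In the paper's framing, the direction would be estimated from its N = 50 calibration set and the scale tuned so that Hinglish prompts are nudged toward the model's high-utility representations while code-mixed fidelity is preserved; the layer and strength here are purely illustrative.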