Xin Zhao

Papers on this page may belong to the following people: Wayne Xin Zhao, Xin Zhao (Chinese Academy of Sciences)


2026

As large language models (LLMs) are increasingly deployed as multilingual services, keeping their factual knowledge accurate across languages has become both essential and challenging. However, most existing knowledge editing (KE) methods are static, in that they update parameters offline for a given set of accumulated edits, and they struggle to effectively propagate edits in one language to others while avoiding side effects. To mitigate this issue, we propose **CLICKER**, a KE method with stepwise reasoning that dynamically retrieves only the knowledge relevant to a given query and then edits in context, while maintaining cross-lingual consistency through: (1) relevance-aware knowledge retrieval, (2) on-demand in-context KE, and (3) language alignment of the outputs. To rigorously evaluate the locality of edits in cross-lingual KE, we develop the **Multi-CounterFact** dataset, which contains many semantically similar but irrelevant prompts for each edit. Experiments on Multi-CounterFact and MzsRE with both open- and closed-source LLMs confirm that CLICKER effectively localizes edits and resolves cross-lingual inconsistencies, outperforming dynamic KE baselines.
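The retrieve-then-edit loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the lexical-overlap relevance score, and the prompt format are all assumptions standing in for CLICKER's relevance-aware retrieval and in-context editing steps.

```python
# Hypothetical sketch of a stepwise retrieve-then-edit-in-context pipeline
# in the spirit of CLICKER (all names and scoring are illustrative assumptions).

def relevance_score(query, prompt):
    # Toy token-overlap (Jaccard) score; the actual method uses
    # relevance-aware retrieval, not simple lexical overlap.
    q, p = set(query.lower().split()), set(prompt.lower().split())
    return len(q & p) / max(len(q | p), 1)

def retrieve_relevant_edits(query, edit_memory, threshold=0.3):
    # Step 1: keep only the stored edits relevant to this query,
    # so unrelated (but similar-looking) prompts are left untouched.
    return [e for e in edit_memory
            if relevance_score(query, e["prompt"]) >= threshold]

def build_prompt(query, edits, target_lang="English"):
    # Steps 2-3: inject the retrieved edits as in-context knowledge and
    # request the answer in the target language for cross-lingual alignment.
    context = "\n".join(f"New fact: {e['prompt']} {e['target']}" for e in edits)
    return f"{context}\nAnswer in {target_lang}.\nQuestion: {query}\nAnswer:"

edit_memory = [
    {"prompt": "the capital of france is", "target": "Rome"},
    {"prompt": "mount everest is located in", "target": "Africa"},
]
relevant = retrieve_relevant_edits("the capital of france is", edit_memory)
prompt = build_prompt("the capital of france is", relevant)
```

Only edits passing the relevance threshold enter the prompt; irrelevant stored edits never reach the model, which is how locality is preserved.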
Multilingual domain adaptation (ML-DA) enables large language models (LLMs) to acquire domain knowledge across languages. Despite many proposed methods, how domain knowledge is acquired within a language and transferred across languages remains unclear, leading to suboptimal performance, particularly in low-resource settings. This work examines the learning dynamics of LLMs during ML-DA. Because prior ML-DA studies often train and evaluate on datasets with mismatched knowledge coverage, we propose AdaXEval, an adaptive evaluation method that constructs multiple-choice QA datasets from the same bilingual domain corpus used for training, thereby enabling direct analysis of multilingual knowledge acquisition. Through continual training of LLMs with diverse data recipes, we track how LLMs acquire domain facts and pinpoint the loss shielding mechanism behind knowledge memorization and generalization in domain adaptation. Our experiments on multilingual LLMs reveal that cross-lingual transfer remains challenging. The code is released.
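The key idea behind AdaXEval, deriving evaluation items from the same facts the model was trained on, could look roughly like the sketch below. The fact schema, field names, and distractor-sampling strategy are assumptions for illustration, not the released code.

```python
import random

# Hypothetical illustration of AdaXEval-style evaluation construction:
# turn a fact drawn from the training corpus into a multiple-choice
# question, with distractors sampled from other facts' answers.
# The (subject, relation, object) schema is an assumed simplification.

def make_mcq(fact, distractor_objects, rng=None):
    rng = rng or random.Random(0)
    # One correct answer plus three distractors from the same corpus,
    # so train and eval share exactly the same knowledge coverage.
    options = [fact["object"]] + rng.sample(distractor_objects, 3)
    rng.shuffle(options)
    question = f"{fact['subject']} {fact['relation']} ___?"
    return {"question": question,
            "options": options,
            "answer_index": options.index(fact["object"])}

fact = {"subject": "Aspirin", "relation": "is used to treat", "object": "pain"}
mcq = make_mcq(fact, ["fever", "insomnia", "anemia", "cough"])
```

Because questions are generated from the training corpus itself, a wrong answer indicates a genuine acquisition or transfer failure rather than a coverage mismatch between training and evaluation data.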