Yuemei Xu
2025
Cross-lingual Word Embedding Alignment Based on Dynamic Subspace Reconstruction and Its Applications
Xiaoyang Gu | Ling Hu | Yuemei Xu
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
Unsupervised bilingual lexicon induction (BLI) learns a mapping function to align the monolingual word embedding spaces of two languages and thereby derive word translations; it has achieved notable success on similar language pairs. However, traditional methods rely on a single linear mapping and perform poorly on distant or low-resource language pairs. To address this, we propose DM-BLI, an unsupervised BLI algorithm and application framework based on dynamic multiple-subspace alignment. DM-BLI improves alignment precision through multi-subspace mapping: it reconstructs the source embedding space, identifies subspaces via unsupervised clustering, locates the corresponding subspaces in the target space through a rough global alignment, and refines the mapping matrices with intra-cluster and inter-cluster contrastive learning. Supervised and unsupervised experiments on 5 high-resource and 5 low-resource language pairs show substantial performance gains. In addition, DM-BLI uses the induced dictionaries together with the logit lens technique to evaluate the cross-lingual ability of large language models (LLMs): it computes cosine similarities on translation and repetition tasks and verifies the semantic plausibility of model-generated translations against the semantic structure of the word embedding space. Whereas traditional cross-lingual LLM evaluation uses static BLI translation pairs as the sole standard, DM-BLI can recognize semantically plausible translations not covered by the dictionary, markedly improving the robustness and semantic generalization of the evaluation and measuring LLMs' cross-lingual semantic mapping ability more accurately and comprehensively. Our code is released at https://github.com/huling-2/DM-BLI.git.
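The evaluation idea described above — accepting a model-generated translation if it is semantically close to a dictionary translation in embedding space, rather than requiring an exact string match — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the similarity threshold are assumptions.

```python
import numpy as np

def is_semantically_valid(candidate_vec, gold_vecs, threshold=0.5):
    """Accept a candidate translation if its word embedding has cosine
    similarity >= threshold with any dictionary translation's embedding.

    candidate_vec: 1-D embedding of the model-generated translation.
    gold_vecs: list of 1-D embeddings of dictionary translations.
    The threshold value is illustrative, not from the paper.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return any(cos(candidate_vec, g) >= threshold for g in gold_vecs)
```

Under this relaxed criterion, a synonym of the dictionary entry (close in embedding space) counts as a valid translation, which is what gives the evaluation its robustness over exact dictionary matching.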
Linguistic Neuron Overlap Patterns to Facilitate Cross-lingual Transfer on Low-resource Languages
Yuemei Xu | Kexin Xu | Jian Zhou | Ling Hu | Lin Gui
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Current Large Language Models (LLMs) face significant challenges in improving their performance on low-resource languages and urgently need data-efficient methods that avoid costly fine-tuning. From a language-bridge perspective, we propose a simple yet effective method, BridgeX-ICL, to improve zero-shot Cross-lingual In-Context Learning (X-ICL) for low-resource languages. Unlike existing works that focus on language-specific neurons, BridgeX-ICL explores whether shared neurons can improve cross-lingual performance in LLMs. We construct neuron probe data from the ground-truth MUSE bilingual dictionaries and accordingly define a subset of language overlap neurons, ensuring full activation of these anchored neurons. We then propose an HSIC-based metric to quantify LLMs' internal linguistic spectrum based on overlapping neurons, guiding optimal bridge selection. Experiments on 4 cross-lingual tasks and 15 language pairs from 7 diverse families, covering both high-low and moderate-low pairs, validate the effectiveness of BridgeX-ICL and offer empirical insights into the underlying multilingual mechanisms of LLMs. The code is publicly available at https://github.com/xuyuemei/BridgeX-ICL.
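The HSIC-based metric mentioned above measures statistical dependence between two sets of neuron activations. As a minimal sketch (assuming linear kernels and paired activation matrices; the paper's exact kernel choice and normalization may differ), the empirical Hilbert-Schmidt Independence Criterion can be computed as:

```python
import numpy as np

def hsic(X, Y):
    """Empirical HSIC between two activation matrices of shape (n, d),
    one row per probe example, using linear kernels. Higher values mean
    stronger dependence between the two languages' neuron activations."""
    n = X.shape[0]
    K = X @ X.T                            # kernel on language-A activations
    L = Y @ Y.T                            # kernel on language-B activations
    H = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A higher HSIC score between the activations elicited by a candidate bridge language and the target language would then favor selecting that bridge, which is the role the metric plays in the method described above.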
2024
DM-BLI: Dynamic Multiple Subspaces Alignment for Unsupervised Bilingual Lexicon Induction
Ling Hu | Yuemei Xu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The unsupervised bilingual lexicon induction (BLI) task aims to find word translations between languages and has achieved great success on similar language pairs. However, related works mostly rely on a single linear mapping for language alignment and fail on distant or low-resource language pairs, achieving less than half the performance observed on rich-resource language pairs. In this paper, we introduce DM-BLI, a Dynamic Multiple subspaces alignment framework for unsupervised BLI. DM-BLI improves language alignment by using multiple subspace alignments instead of a single mapping. We begin with unsupervised clustering to discover these subspaces in the source embedding space. We then identify and align corresponding subspaces in the target space using a rough global alignment. DM-BLI further employs intra-cluster and inter-cluster contrastive learning to refine the alignment of each subspace pair. Experiments conducted on standard BLI datasets for 12 language pairs (6 rich-resource and 6 low-resource) demonstrate substantial gains achieved by our framework. We release our code at https://github.com/huling-2/DM-BLI.git.
2023
Evaluating Factuality in Cross-lingual Summarization
Mingqi Gao | Wenqing Wang | Xiaojun Wan | Yuemei Xu
Findings of the Association for Computational Linguistics: ACL 2023
Cross-lingual summarization aims to help people efficiently grasp the core idea of a document written in a foreign language. Modern text summarization models generate highly fluent but often factually inconsistent outputs, which has received heightened attention in recent research. However, the factual consistency of cross-lingual summarization has not yet been investigated. In this paper, we build a cross-lingual factuality dataset by collecting human annotations of reference summaries as well as model-generated summaries, at both the summary level and the sentence level. Furthermore, we perform a fine-grained analysis and observe that over 50% of generated summaries and over 27% of reference summaries contain factual errors, with characteristics different from monolingual summarization. Existing evaluation metrics for monolingual summarization require translation to evaluate the factuality of cross-lingual summarization and perform inconsistently across tasks and levels. Finally, we adapt the monolingual factuality metrics as an initial step towards the automatic evaluation of summarization factuality in cross-lingual settings. Our dataset and code are available at https://github.com/kite99520/Fact_CLS.
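As the abstract notes, applying a monolingual factuality metric cross-lingually requires first translating the summary into the document's language (or vice versa), then scoring. The sketch below shows only the shape of that translate-then-score pipeline with a deliberately crude token-overlap proxy in place of a real metric; the function names are illustrative assumptions, and real metrics (e.g. entailment- or QA-based) are far stronger than this proxy.

```python
def token_overlap_factuality(summary, document):
    """Crude precision-style proxy: the fraction of summary tokens that
    also appear in the source document. Stands in for a real monolingual
    factuality metric purely for illustration."""
    doc_tokens = set(document.lower().split())
    summ_tokens = summary.lower().split()
    if not summ_tokens:
        return 0.0
    return sum(t in doc_tokens for t in summ_tokens) / len(summ_tokens)

def cross_lingual_factuality(summary, document, translate):
    """Translate the summary into the document's language (the `translate`
    callable is supplied by the caller), then apply a monolingual metric."""
    return token_overlap_factuality(translate(summary), document)
```

The key design point this illustrates is that the translation step itself can introduce or mask factual errors, which is one reason the paper finds translated monolingual metrics perform inconsistently across tasks and levels.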