Sha Jiu
2025
基于个性化记忆策略的小参数语言模型高效对齐方法 (Efficient Alignment of Small-Parameter Language Models Based on a Personalized Memory Strategy)
Mengxiao Zhu | Peilin Tang | Sha Jiu | Chong Feng | Lama Jie | Yandanzhicao
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"在信息爆炸的时代背景下,大模型每天都需处理庞大的知识与数据量。面对缺乏大规模工业级训练设施的现实,小参数模型成为了一种必要选择。然而,这些模型的信息处理需求远远超出其自然存储能力,这引发了一个核心问题:小参数模型应该记住什么,又应该忘记什么?传统的全记忆学习方法由于模型参数容量有限而不再高效,尝试记住一切不仅效率低,还可能引起过重的认知负担,降低思考质量。本文旨在重新定义有限记忆资源下的大语言模型记忆策略。本文首先将模型的记忆划分为内部记忆与外部记忆两个维度,并系统探讨了哪些知识应被优先内化为内部记忆。基于此,我们提出一种个性化记忆策略,针对不同类型的内部知识构建对应的对齐机制,使模型记忆更符合人类偏好与推理需求。这一策略不仅显著增强了小参数模型的理解能力与深度推理能力,也从根本上挑战了坜记得越多越好圢的传统假设,展示了战略性记忆选择在提升学习效率方面的巨大潜力。此外,本文还构建了关于内部记忆的训练集和评测数据集,并在仅使用3B参数规模的模型上进行了系统实验。实验结果显示,本文方法在该评测数据上实现了最佳效果,甚至在多个指标上超越了闭源模型及参数规模达70B的大型模型。为推动行业发展,我们已开源整个训练策略、模型权重及对应的评测数据集和评测方法。"
TVQACML: Benchmarking Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages
Sha Jiu | Yu Weng | Mengxiao Zhu | Chong Feng | Zheng Liu | Jialedongzhu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Text-Centric Visual Question Answering (TEC-VQA) is a critical research area that requires semantic interaction between objects and scene text. However, most existing TEC-VQA benchmarks focus on high-resource languages such as English and Chinese. Although a few works expand multilingual QA pairs in non-text-centric VQA datasets through translation, this approach encounters a substantial "visual-textual misalignment" problem when applied to TEC-VQA. Moreover, the open-source nature of these benchmarks and the broad sources of training data for MLLMs have inevitably led to benchmark contamination, resulting in unreliable evaluation results. To alleviate this issue, we propose a contamination-free and more challenging TEC-VQA benchmark, Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages (TVQACML), which covers eight languages: Standard Chinese, Korean, and six minority languages. TVQACML supports a wide range of tasks, including Text Recognition, Scene Text-Centric VQA, Document-Oriented VQA, Key Information Extraction (KIE), and Handwritten Mathematical Expression Recognition (HMER), featuring 32,000 question-answer pairs across 8,000 images. Extensive experiments with multiple MLLMs on TVQACML demonstrate its effectiveness for evaluating MLLMs and show that fine-tuning on it enhances multilingual TEC-VQA performance.
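As a rough illustration of how a benchmark of this shape is typically consumed, the sketch below aggregates exact-match accuracy per (language, task) cell over the five task types named in the abstract. The dataset record fields and the model.answer(...) call are hypothetical, since the abstract does not describe the release format or a model API.

```python
# Illustrative sketch only: per-(language, task) exact-match accuracy for a
# TVQACML-style evaluation. Record fields and the model.answer(...) call are
# hypothetical assumptions, not the benchmark's documented interface.
from collections import defaultdict

TASKS = [
    "Text Recognition",
    "Scene Text-Centric VQA",
    "Document-Oriented VQA",
    "Key Information Extraction",
    "Handwritten Mathematical Expression Recognition",
]

def evaluate(model, dataset):
    """dataset is assumed to yield dicts with 'image', 'question', 'answer',
    'language', and 'task' keys (task being one of TASKS); exact match is a
    simplification of whatever metric the benchmark actually uses."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in dataset:
        pred = model.answer(ex["image"], ex["question"])  # hypothetical MLLM call
        key = (ex["language"], ex["task"])
        correct[key] += int(pred.strip() == ex["answer"].strip())
        total[key] += 1
    return {key: correct[key] / total[key] for key in total}
```

Reporting scores per (language, task) cell rather than a single aggregate is what lets a benchmark like this expose where MLLMs fall short on individual minority languages.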