Chunliang Zhang
2025
HEAL: A Hypothesis-Based Preference-Aware Analysis Framework
Yifu Huo | Chenglong Wang | Qiren Zhu | Shunjie Xing | Tong Xiao | Chunliang Zhang | Tongran Liu | JingBo Zhu
Findings of the Association for Computational Linguistics: EMNLP 2025
Preference optimization methods like DPO have achieved remarkable performance in LLM alignment. However, the evaluation of these methods relies on a single response and overlooks other potential outputs that could also be generated in real-world applications within the same hypothesis space. To address this issue, this paper presents a Hypothesis-based PrEference-aware AnaLysis Framework (HEAL), a novel evaluation paradigm that formulates preference alignment as a re-ranking process within hypothesis spaces. The framework incorporates two complementary metrics: ranking accuracy for evaluating ordinal consistency and preference strength correlation for assessing continuous alignment. To facilitate this framework, we develop UniHypoBench, a unified hypothesis benchmark constructed from diverse instruction-response pairs. Through extensive experiments based on HEAL, with a particular focus on the intrinsic mechanisms of preference learning, we demonstrate that current preference learning methods can effectively capture preferences provided by proxy models while simultaneously suppressing negative samples. These findings contribute to preference learning research in two significant ways. Theoretically, we introduce hypothesis space analysis as an innovative paradigm for understanding preference alignment. Practically, HEAL offers researchers robust diagnostic tools for refining preference optimization methods, while our empirical results identify promising directions for developing more advanced alignment algorithms capable of comprehensive preference capture.
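The two metrics can be illustrated with a minimal sketch. The Python below is a hypothetical simplification, not the paper's exact formulation: the function names, the use of Spearman correlation for preference strength, and the toy policy/reference scores are all assumptions.

```python
# Minimal sketch of hypothesis-space evaluation in the spirit of HEAL.
# Assumption: reference scores come from a proxy preference model, and
# policy scores are per-hypothesis log-probabilities under the aligned model.
from itertools import combinations
from scipy.stats import spearmanr

def ranking_accuracy(policy_scores, reference_scores):
    """Fraction of hypothesis pairs ordered the same way by policy and reference."""
    pairs = list(combinations(range(len(policy_scores)), 2))
    agree = sum(
        (policy_scores[i] - policy_scores[j]) * (reference_scores[i] - reference_scores[j]) > 0
        for i, j in pairs
    )
    return agree / len(pairs)

def preference_strength_correlation(policy_scores, reference_scores):
    """Rank correlation between policy scores and reference preference strengths."""
    return spearmanr(policy_scores, reference_scores).correlation

# Toy hypothesis space with four candidate responses to one instruction.
policy = [-1.2, -0.4, -2.0, -0.9]    # policy log-probs per hypothesis
reference = [0.3, 0.9, 0.1, 0.5]     # proxy-model preference scores
print(ranking_accuracy(policy, reference))
print(preference_strength_correlation(policy, reference))
```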
A Knowledge Editing Method Based on Associated Neuron Identification
Yuzhang Wu | Yongyu Mu | Chenglong Wang | Qiaozhi He | Tong Xiao | Anxiang Ma | Chunliang Zhang | JingBo Zhu
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
"近年来,大语言模型展现出了从训练语料中存储并提取知识的优秀能力,但相应地,其可靠性也容易遭受训练语料中错误信息的破坏,进而产生信息过时、错误回复等问题。基于神经元识别的知识编辑方法通过在模型中识别并微调与目标知识相关的知识神经元,实现对模型内部知识的精确修改。然而,本文研究发现,知识的表达形式会显著影响知识神经元的识别结果,例如,现有神经元识别方法对于同一知识的不同表达形式识别得到的神经元集合平均重叠率只有21.86%。这就导致只对单一的表达形式进行知识编辑无法覆盖到与这个知识相关的所有神经元,所以现有知识编辑方法的鲁棒性往往较差。为了全面且准确地识别到与某一知识相关的所有神经元,本文设计了一种轻量级关联神经元识别器(Light weight Associated Neuron Detector,LAND),通过学习不同表达形式的知识识别出的知识神经元集合之间的差异,从而在知识神经元识别的过程中,自动补全因表达形式差异而未被检出的知识神经元。实验结果表明,LAND方法能够将不同表达形式的文本识别出的知识神经元平均重叠率提升至96%以上,在不同句式的知识编辑成功率上较基线方法多提升了至多10.83个百分点。"
2024
Revisiting Interpolation Augmentation for Speech-to-Text Generation
Chen Xu | Jie Wang | Xiaoqian Liu | Qian Dong | Chunliang Zhang | Tong Xiao | JingBo Zhu | Dapeng Man | Wu Yang
Findings of the Association for Computational Linguistics: ACL 2024
Speech-to-text (S2T) generation systems frequently face challenges in low-resource scenarios, primarily due to the lack of extensive labeled datasets. One emerging solution is constructing virtual training samples by interpolating inputs and labels, which has notably enhanced system generalization in other domains. Despite its potential, this technique’s application in S2T tasks has remained under-explored. In this paper, we delve into the utility of interpolation augmentation, guided by several pivotal questions. Our findings reveal that employing an appropriate strategy in interpolation augmentation significantly enhances performance across diverse tasks, architectures, and data scales, offering a promising avenue for more robust S2T systems in resource-constrained settings.
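A common instantiation of such interpolation is mixup-style blending of paired examples. The sketch below is a hedged illustration, not the paper's exact recipe: the tensor shapes, the Beta-sampled coefficient, and the assumption of equal-length padded inputs are all mine.

```python
# Hedged sketch of mixup-style interpolation for S2T training
# (assumed setup: equal-length padded feature tensors and one-hot label tensors).
import torch

def interpolate_pair(feats_a, feats_b, labels_a, labels_b, alpha=0.2):
    """Blend two training examples with a Beta(alpha, alpha) coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    feats = lam * feats_a + (1.0 - lam) * feats_b
    labels = lam * labels_a + (1.0 - lam) * labels_b
    return feats, labels

# Toy example: 100-frame, 80-dim filterbank features; 20-token, 1000-way label dists.
fa, fb = torch.randn(100, 80), torch.randn(100, 80)
la, lb = torch.zeros(20, 1000), torch.zeros(20, 1000)
la[:, 3], lb[:, 7] = 1.0, 1.0
virtual_feats, virtual_labels = interpolate_pair(fa, fb, la, lb)
```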
Prior Constraints-based Reward Model Training for Aligning Large Language Models
Hang Zhou | Chenglong Wang | Yimin Hu | Tong Xiao | Chunliang Zhang | Jingbo Zhu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Reinforcement learning with human feedback for aligning large language models (LLMs) trains a reward model typically using ranking loss with comparison pairs. However, the training procedure suffers from an inherent problem: the uncontrolled scaling of reward scores during reinforcement learning due to the lack of constraints while training the reward model. This paper proposes a Prior Constraints-based Reward Model (PCRM) training method to mitigate this problem. PCRM incorporates prior constraints, specifically the length ratio and cosine similarity between outputs of each comparison pair, during reward model training to regulate optimization magnitude and control score margins. We comprehensively evaluate PCRM by examining its rank correlation with human preferences and its effectiveness in aligning LLMs via RL. Experimental results demonstrate that PCRM significantly improves alignment performance by effectively constraining reward score scaling. As another bonus, our method is easily integrated into arbitrary rank-based alignment methods, such as direct preference optimization, and can yield consistent improvement. The code is available at https://github.com/wangclnlp/DeepSpeed-Chat-Extension/tree/PCRM.
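A minimal sketch of the idea follows; the margin function is a hypothetical simplification of the paper's length-ratio and cosine-similarity constraints, added to a standard pairwise ranking loss.

```python
# Hedged sketch: pairwise ranking loss with a prior-based margin,
# in the spirit of PCRM (the exact constraint form is an assumption).
import torch
import torch.nn.functional as F

def prior_margin(len_chosen, len_rejected, cos_sim, w_len=0.1, w_sim=0.1):
    """Larger margin when the two outputs differ more in length and content."""
    length_ratio = abs(len_chosen - len_rejected) / max(len_chosen, len_rejected)
    return w_len * length_ratio + w_sim * (1.0 - cos_sim)

def pcrm_style_loss(reward_chosen, reward_rejected, margin):
    """Ranking loss where the chosen reward must beat the rejected one by `margin`."""
    return -F.logsigmoid(reward_chosen - reward_rejected - margin)

margin = prior_margin(len_chosen=42, len_rejected=30, cos_sim=0.8)
loss = pcrm_style_loss(torch.tensor(1.3), torch.tensor(0.9), margin)
```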
Exploiting Target Language Data for Neural Machine Translation Beyond Back Translation
Abudurexiti Reheman | Yingfeng Luo | Junhao Ruan | Chunliang Zhang | Anxiang Ma | Tong Xiao | JingBo Zhu
Findings of the Association for Computational Linguistics: ACL 2024
Neural Machine Translation (NMT) encounters challenges when translating in new domains and low-resource languages. To address these issues, researchers have proposed methods to integrate additional knowledge into NMT, such as translation memories (TMs). However, finding TMs that closely match the input sentence remains challenging, particularly in specific domains. On the other hand, monolingual data is widely accessible in most languages, and back-translation is seen as a promising approach for utilizing target language data. Nevertheless, it still necessitates additional training. In this paper, we introduce Pseudo-kNN-MT, a variant of k-nearest neighbor machine translation (kNN-MT) that utilizes target language data by constructing a pseudo datastore. Furthermore, we investigate the utility of large language models (LLMs) for the kNN component. Experimental results demonstrate that our approach exhibits strong domain adaptation capability in both high-resource and low-resource machine translation. Notably, LLMs are found to be beneficial for robust NMT systems.
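The kNN component can be sketched as follows. This is a hedged, generic kNN-MT-style illustration with assumed shapes and names; in the pseudo variant described above, the datastore entries would be built from target-language monolingual data rather than parallel data.

```python
# Hedged sketch of kNN-MT-style interpolation over a (pseudo) datastore.
# keys: (N, d) decoder hidden states; values: (N,) LongTensor of next-token ids.
import torch

def knn_probs(query, keys, values, vocab_size, k=8, temperature=10.0):
    """Turn the k nearest datastore entries into a distribution over the vocabulary."""
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)   # L2 distances to all entries
    topk = torch.topk(-dists, k)                               # nearest neighbors
    weights = torch.softmax(topk.values / temperature, dim=-1)
    probs = torch.zeros(vocab_size)
    probs.scatter_add_(0, values[topk.indices], weights)
    return probs

def interpolate(nmt_dist, knn_dist, lam=0.5):
    """Final next-token distribution mixes the NMT model and the datastore."""
    return (1 - lam) * nmt_dist + lam * knn_dist

# Toy datastore: 100 entries of 16-dim keys, each storing a next-token id.
keys = torch.randn(100, 16)
values = torch.randint(0, 32000, (100,))
p_knn = knn_probs(torch.randn(16), keys, values, vocab_size=32000)
```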
Revealing the Parallel Multilingual Learning within Large Language Models
Yongyu Mu | Peinan Feng | Zhiquan Cao | Yuzhang Wu | Bei Li | Chenglong Wang | Tong Xiao | Kai Song | Tongran Liu | Chunliang Zhang | JingBo Zhu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) can handle multilingual and cross-lingual text within a single input; however, previous works leveraging multilingualism in LLMs primarily focus on using English as the pivot language to enhance language understanding and reasoning. Given that multiple languages can compensate for the losses caused by any single language’s limitations, it is a natural next step to enrich the model’s learning context by integrating the original input with its translations into multiple languages. In this paper, we start by revealing that LLMs learn from parallel multilingual input (PMI). Our comprehensive evaluation shows that PMI enhances the model’s comprehension of the input, achieving superior performance to conventional in-context learning (ICL). Furthermore, to explore how multilingual processing affects prediction, we examine the activated neurons in LLMs. Surprisingly, involving more languages in the input activates fewer neurons, leading to more focused and effective neural activation patterns. This neural reaction coincidentally mirrors the neuroscience insight about synaptic pruning, highlighting a similarity between artificial and biological ‘brains’.
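A rough sketch of how a parallel multilingual input might be assembled is shown below. The prompt template and the `translate` helper are hypothetical; the paper's exact input format is not reproduced here.

```python
# Hedged sketch: assembling a parallel multilingual input (PMI) style prompt.
# `translate` is a hypothetical helper, e.g. an off-the-shelf MT system.
def build_pmi_prompt(question, languages, translate):
    """Concatenate the original question with its translations before the task instruction."""
    parts = [f"English: {question}"]
    parts += [f"{lang}: {translate(question, lang)}" for lang in languages]
    parts.append("Answer the question based on all versions above.")
    return "\n".join(parts)

# Usage with a stub translator:
prompt = build_pmi_prompt(
    "What is the capital of France?",
    ["German", "Spanish"],
    translate=lambda text, lang: f"<{lang} translation of: {text}>",
)
```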
2023
Augmenting Large Language Model Translators via Translation Memories
Yongyu Mu | Abudurexiti Reheman | Zhiquan Cao | Yuchun Fan | Bei Li | Yinqiao Li | Tong Xiao | Chunliang Zhang | Jingbo Zhu
Findings of the Association for Computational Linguistics: ACL 2023
Using translation memories (TMs) as prompts is a promising approach to in-context learning of machine translation models. In this work, we take a step towards prompting large language models (LLMs) with TMs and making them better translators. We find that the ability of LLMs to “understand” prompts is indeed helpful for making better use of TMs. Experiments show that the results of a pre-trained LLM translator can be greatly improved by using high-quality TM-based prompts. These results are even comparable to those of the state-of-the-art NMT systems which have access to large-scale in-domain bilingual data and are well tuned on the downstream tasks.
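One way to realize TM-based prompting is sketched below. The template wording and the `search_tm` fuzzy-match retriever are assumptions for illustration, not the paper's exact prompt: the retrieved TM pairs are simply prepended as in-context translation examples.

```python
# Hedged sketch: prompting an LLM translator with retrieved translation memories.
# `search_tm` is a hypothetical fuzzy-match retriever over (source, target) pairs.
def build_tm_prompt(source, search_tm, top_k=3):
    """Prepend the most similar TM pairs as in-context examples, then the new source."""
    examples = search_tm(source, top_k)  # list of (src, tgt) pairs, most similar first
    lines = [f"Source: {s}\nTarget: {t}" for s, t in examples]
    lines.append(f"Source: {source}\nTarget:")
    return "\n\n".join(lines)
```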
Rethinking and Improving Multi-task Learning for End-to-end Speech Translation
Yuhao Zhang | Chen Xu | Bei Li | Hao Chen | Tong Xiao | Chunliang Zhang | Jingbo Zhu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Significant improvements in end-to-end speech translation (ST) have been achieved through the application of multi-task learning. However, the extent to which auxiliary tasks are highly consistent with the ST task, and how much this approach truly helps, have not been thoroughly studied. In this paper, we investigate the consistency between different tasks, considering different times and modules. We find that the textual encoder primarily facilitates cross-modal conversion, but the presence of noise in speech impedes the consistency between text and speech representations. Furthermore, we propose an improved multi-task learning (IMTL) approach for the ST task, which bridges the modal gap by mitigating the difference in length and representation. We conduct experiments on the MuST-C dataset. The results demonstrate that our method attains state-of-the-art results. Moreover, when additional data is used, we achieve the new SOTA result on MuST-C English to Spanish task with 20.8% of the training time required by the current SOTA method.
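As a generic illustration of reducing the length and representation mismatch between modalities (not the IMTL procedure itself), one simple approach is to pool speech encoder states down to the text length and penalize the distance between the pooled speech and text representations; all names and shapes below are assumptions.

```python
# Generic illustration (not the paper's IMTL method): pool speech encoder states
# to the text length and penalize the representation gap between modalities.
import torch
import torch.nn.functional as F

def modality_gap_loss(speech_states, text_states):
    """speech_states: (T_speech, d); text_states: (T_text, d), with T_speech >= T_text."""
    pooled = F.adaptive_avg_pool1d(
        speech_states.t().unsqueeze(0), text_states.size(0)
    ).squeeze(0).t()                       # (T_text, d): speech pooled to text length
    return F.mse_loss(pooled, text_states)

loss = modality_gap_loss(torch.randn(120, 256), torch.randn(20, 256))
```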
2019
Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition
Yufan Jiang | Chi Hu | Tong Xiao | Chunliang Zhang | Jingbo Zhu
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
In this paper, we study differentiable neural architecture search (NAS) methods for natural language processing. In particular, we improve differentiable architecture search by removing the softmax-local constraint. Also, we apply differentiable NAS to named entity recognition (NER). It is the first time that differentiable NAS methods are adopted in NLP tasks other than language modeling. On both the PTB language modeling and CoNLL-2003 English NER data, our method outperforms strong baselines. It achieves a new state-of-the-art on the NER task.
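For context, standard differentiable NAS mixes candidate operations on each edge with softmax-normalized architecture weights. The sketch below shows that baseline formulation only (hypothetical code with assumed operation choices); the paper's improvement relaxes this per-edge softmax constraint rather than using it.

```python
# Hedged sketch of the standard DARTS-style mixed operation that the paper modifies.
# Each edge's output is a weighted sum of candidate ops; the baseline normalizes
# the architecture weights with a softmax (the constraint the improved method removes).
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture weights

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=-1)   # the "softmax-local constraint"
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp([nn.Identity(), nn.Linear(64, 64), nn.Tanh()])
out = edge(torch.randn(8, 64))
```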
2014
A Hybrid Approach to Skeleton-based Translation
Tong Xiao | Jingbo Zhu | Chunliang Zhang
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
2012
Learning Better Rule Extraction with Translation Span Alignment
Jingbo Zhu | Tong Xiao | Chunliang Zhang
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Co-authors
- Tong Xiao (肖桐) 11
- Jingbo Zhu 11
- Chenglong Wang 4
- Bei Li 3
- Yongyu Mu 3
- Zhiquan Cao 2
- Eduard Hovy 2
- Tongran Liu 2
- Anxiang Ma 2
- Abudurexiti Reheman 2
- Yuzhang Wu 2
- Chen Xu 2
- Hao Chen 1
- Qian Dong 1
- Yuchun Fan 1
- Peinan Feng 1
- Qiaozhi He 1
- Dirk Hovy 1
- Chi Hu 1
- Yimin Hu 1
- Yifu Huo 1
- Yufan Jiang 1
- Yinqiao Li 1
- Xiaoqian Liu 1
- Yingfeng Luo 1
- Dapeng Man 1
- Donald Metzler 1
- Anselmo Peñas 1
- Junhao Ruan 1
- Kai Song 1
- Jie Wang 1
- Shunjie Xing 1
- Wu Yang 1
- Yuhao Zhang 1
- Hang Zhou 1
- Qiren Zhu 1