2024
Can LLMs Learn Uncertainty on Their Own? Expressing Uncertainty Effectively in A Self-Training Manner
Shudong Liu | Zhaocong Li | Xuebo Liu | Runzhe Zhan | Derek F. Wong | Lidia S. Chao | Min Zhang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) often exhibit excessive, random, and uninformative uncertainty, rendering them unsuitable for decision-making in human-computer interactions. In this paper, we aim to instill a heightened awareness of self-uncertainty in LLMs, enabling them to express uncertainty more effectively. To accomplish this, we propose an uncertainty-aware instruction tuning (UaIT) method, aligning LLMs’ perception with the probabilistic uncertainty of the generation. We conducted experiments using LLaMA2 and Mistral on multiple free-form QA tasks. Experimental results revealed a surprising 45.2% improvement in the effectiveness of uncertainty expression by LLMs, accompanied by reasonably good out-of-domain generalization capabilities. Moreover, this uncertainty expression can serve as a valuable real-time basis for human decision-making, e.g., retrieving external documents and incorporating stronger LLMs.
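The abstract does not spell out how the tuning data are built; the sketch below illustrates one plausible reading, in which a sequence-level uncertainty score is derived from the model's own token probabilities and verbalized into the training target. The thresholds, prompt wording, and the `build_uait_example` helper are hypothetical, not the paper's actual recipe.

```python
def sequence_uncertainty(token_logprobs):
    """Length-normalized negative log-likelihood of a sampled answer,
    computed from the log-probabilities the model assigned to its own tokens."""
    return -sum(token_logprobs) / max(len(token_logprobs), 1)

def verbalize_confidence(uncertainty, low=0.5, high=1.5):
    """Map the numeric score to a verbal label (thresholds are hypothetical)."""
    if uncertainty < low:
        return "high"
    if uncertainty < high:
        return "medium"
    return "low"

def build_uait_example(question, answer, token_logprobs):
    """One candidate tuning example whose target states both the answer and
    the confidence derived from the model's own probabilities."""
    confidence = verbalize_confidence(sequence_uncertainty(token_logprobs))
    return {
        "instruction": f"{question}\nAnswer and state your confidence.",
        "output": f"{answer}\nConfidence: {confidence}",
    }

# A confidently generated answer yields a "high" confidence target.
print(build_uait_example("What is the capital of France?", "Paris", [-0.05, -0.10]))
```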
Domain-Aware k-Nearest-Neighbor Knowledge Distillation for Machine Translation
Zhexuan Wang | Shudong Liu | Xuebo Liu | Miao Zhang | Derek Wong | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2024
kNN-MT has utilized neighborhood knowledge for auxiliary decoding, significantly improving translation performance. Subsequently, kNN-KD transitions the use of neighborhood knowledge from the decoding phase to the training phase, to address the temporal and spatial inefficiencies inherent in kNN-MT. However, kNN-KD transfers all the kNN knowledge arbitrarily, which has the potential to restrict the learning of student models. In this paper, we propose a novel domain-aware kNN-KD method, which filters out domain-relevant neighborhood knowledge for learning in the distillation process. Notably, this entire process exclusively utilizes the neighborhood knowledge of the original model, eliminating the need for establishing any additional datastores. Experiments on four domain translation tasks demonstrate that our method achieves state-of-the-art performance, realizing an average gain of 1.55 COMET and 1.42 BLEU scores, by further enhancing the translation of rare words. Source code can be accessed at https://github.com/wangzx1219/Dk-KD.
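As a rough illustration of the filtering idea, the sketch below turns retrieved neighbors into a soft distillation target while discarding neighbors from other domains. The `(token, distance, domain_tag)` representation and the exact relevance criterion are assumptions made for illustration; the paper's filtering may work differently.

```python
import math

def knn_soft_targets(neighbors, domain=None, temperature=1.0):
    """Build a soft target distribution over tokens from the original model's
    retrieved neighbors, keeping only neighbors whose domain tag matches the
    current sentence's domain (falling back to all neighbors if none match)."""
    if domain is not None:
        kept = [n for n in neighbors if n[2] == domain]
        neighbors = kept or neighbors
    weights = {}
    for token, distance, _ in neighbors:
        weights[token] = weights.get(token, 0.0) + math.exp(-distance / temperature)
    total = sum(weights.values())
    return {token: w / total for token, w in weights.items()}

# The student would then be trained against this distribution (e.g., via a KL
# term) alongside the usual cross-entropy on the reference translation.
neighbors = [("Haus", 0.2, "it"), ("house", 0.9, "medical"), ("Haus", 0.4, "it")]
print(knn_soft_targets(neighbors, domain="it"))
```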
2023
kNN-TL: k-Nearest-Neighbor Transfer Learning for Low-Resource Neural Machine Translation
Shudong Liu | Xuebo Liu | Derek F. Wong | Zhaocong Li | Wenxiang Jiao | Lidia S. Chao | Min Zhang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Transfer learning has been shown to be an effective technique for enhancing the performance of low-resource neural machine translation (NMT). This is typically achieved through either fine-tuning a child model with a pre-trained parent model, or by utilizing the output of the parent model during the training of the child model. However, these methods do not make use of the parent knowledge during child inference, which may limit the translation performance. In this paper, we propose a k-Nearest-Neighbor Transfer Learning (kNN-TL) approach for low-resource NMT, which leverages the parent knowledge throughout the entire development process of the child model. Our approach includes a parent-child representation alignment method, which ensures consistency in the output representations between the two models, and a child-aware datastore construction method that improves inference efficiency by selectively distilling the parent datastore based on relevance to the child model. Experimental results on four low-resource translation tasks show that kNN-TL outperforms strong baselines. Extensive analyses further demonstrate the effectiveness of our approach. Code and scripts are freely available at https://github.com/NLP2CT/kNN-TL.
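For readers unfamiliar with the kNN-MT machinery the method builds on, the sketch below shows the standard interpolation of model probabilities with retrieved-neighbor probabilities at decoding time; in kNN-TL the datastore would be the child-aware one distilled from the parent. The interpolation weight, temperature, and toy inputs are illustrative, not values from the paper.

```python
import math

def knn_interpolate(model_probs, neighbors, lam=0.3, temperature=10.0):
    """Interpolate the child NMT distribution with a distribution built from
    neighbors retrieved out of the (child-aware) parent datastore:
    p(y) = lam * p_kNN(y) + (1 - lam) * p_NMT(y)."""
    knn_weights = {}
    for token, distance in neighbors:
        knn_weights[token] = knn_weights.get(token, 0.0) + math.exp(-distance / temperature)
    z = sum(knn_weights.values()) or 1.0
    tokens = set(model_probs) | set(knn_weights)
    return {
        t: lam * knn_weights.get(t, 0.0) / z + (1 - lam) * model_probs.get(t, 0.0)
        for t in tokens
    }

# Toy decoding step: the retrieved parent knowledge shifts mass toward "Haus".
print(knn_interpolate({"Haus": 0.4, "Heim": 0.6}, [("Haus", 1.0), ("Haus", 2.0)]))
```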
Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization
Chi Cheang | Hou Chan | Derek Wong | Xuebo Liu | Zhaocong Li | Yanming Sun | Shudong Liu | Lidia Chao
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Recent pre-trained language models (PLMs) achieve promising results on existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and fine-tuning datasets. Hence, the strong performance of PLMs may rely on the parametric knowledge that is memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects the generalization performance of PLMs on future data. In this work, we propose TempoSum, a novel benchmark that contains data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations for the research community on how to evaluate and improve the temporal generalization capability of text summarization models.
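The core evaluation idea, splitting test data by time so that part of it postdates the models' pre-training corpora, can be sketched as below; the cutoff date and record format are illustrative assumptions, not TempoSum's actual construction.

```python
from datetime import date

def temporal_splits(examples, cutoff=date(2019, 12, 31)):
    """Split summarization examples by publication date so that models whose
    pre-training data predate the cutoff are also evaluated on future data."""
    in_time = [ex for ex in examples if ex["date"] <= cutoff]
    future = [ex for ex in examples if ex["date"] > cutoff]
    return in_time, future

examples = [
    {"date": date(2015, 6, 1), "article": "...", "summary": "..."},
    {"date": date(2022, 3, 9), "article": "...", "summary": "..."},
]
in_time, future = temporal_splits(examples)
print(len(in_time), len(future))  # 1 1
```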
2021
面向中文口语理解的基于依赖引导的字特征槽填充模型(A Dependency-Guided Character-Based Slot Filling Model for Chinese Spoken Language Understanding)
Zhanbiao Zhu (朱展标) | Peijie Huang (黄沛杰) | Yexing Zhang (张业兴) | Shudong Liu (刘树东) | Hualin Zhang (张华林) | Junyao Huang (黄均曜)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
Joint models for intent detection and slot filling have raised spoken language understanding (SLU) to a new level. However, the sequence labeling performance of such models is limited by rare or unseen (zero-shot) slot mentions, and these joint models often fail to exploit the syntactic knowledge present in the input sequence. Previous work has shown that introducing dependency tree structures can help sequence labeling models infer the presence of slots. In Chinese spoken dialogue understanding, an utterance is a sequence of characters, and the characters of the input correspond one-to-one with the slot labels, so slot filling models are typically character-based. A word-level dependency tree therefore cannot be applied directly to a character-based slot filling model. To resolve this mismatch between words and characters, this paper proposes a dependency-guided character-based slot filling model (DCSF), which offers a concise way to incorporate word-level dependency structures into a Chinese character-based model while preserving word-level context and segmentation information by modeling the relations among the characters within each word. Experimental results on the public benchmark corpora SMP-ECDT and CrossWOZ show that our model outperforms the compared models, with especially large improvements on unseen slot mentions and in low-resource settings.
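The central difficulty the paper addresses, attaching a word-level dependency tree to a character-level tagger, can be illustrated with a simple projection rule: characters inside a word attach to the word's final character, and that final character inherits the word's dependency head. The rule and the helper below are a simplified stand-in for the paper's construction, not its exact method.

```python
def project_dependencies_to_characters(words, word_heads):
    """Project a word-level dependency tree onto characters for a
    character-based Chinese slot filling model.

    `words` is the segmented utterance and `word_heads[i]` is the index of the
    head word of words[i] (-1 for the root). Inside a word, every character
    attaches to the word's last character; that last character attaches to the
    last character of the head word."""
    # end position (in characters) of each word
    ends, pos = [], 0
    for w in words:
        pos += len(w)
        ends.append(pos - 1)

    char_heads, start = [], 0
    for i, w in enumerate(words):
        for j in range(len(w)):
            char_idx = start + j
            if char_idx != ends[i]:
                char_heads.append(ends[i])                # intra-word arc
            elif word_heads[i] == -1:
                char_heads.append(-1)                     # root
            else:
                char_heads.append(ends[word_heads[i]])    # inter-word arc
        start += len(w)
    return char_heads

# "我 想 听 周杰伦" with the verb "听" as root; prints [2, 2, -1, 5, 5, 2]
print(project_dependencies_to_characters(["我", "想", "听", "周杰伦"], [2, 2, -1, 2]))
```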
结合边界预测和动态模板方法的槽填充模型(Slot Filling Model with Boundary Prediction and Dynamic Template)
Zhanbiao Zhu (朱展标) | Peijie Huang (黄沛杰) | Yexing Zhang (张业兴) | Shudong Liu (刘树东) | Hualin Zhang (张华林) | Junyao Huang (黄均曜)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
Joint models for intent detection and slot filling have raised spoken language understanding (SLU) to a new level. However, current models infer positional information only from the utterance context and do not consider the positional relations between slot labels, so they are prone to boundary errors during slot extraction, which degrades the final slot extraction performance. Moreover, in slot extraction, slot mentions may be indistinguishable from ordinary utterance text, especially movie titles, song titles, and the like, so the model is easily distracted by slot mentions and fails to identify slot boundaries correctly. This paper proposes a slot filling model with boundary prediction and dynamic templates (BDSF) for spoken language understanding. The model introduces an auxiliary task that jointly predicts boundary information, bringing positional information into slot filling, and uses a dynamic template mechanism to model the sentence patterns of the utterance, allowing the model to focus on the non-slot-mention parts of the utterance, avoiding interference from slot mentions and strengthening its ability to distinguish slot boundaries. Experimental results on the public benchmark corpora CAIS and SMP-ECDT show that our model outperforms the compared models and, in particular, provides accurate positional information for slot label prediction.
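The auxiliary boundary-prediction task can be illustrated by deriving explicit boundary tags from ordinary BIO slot labels, giving the tagger a direct positional signal to predict jointly with the slots. The B/I/E/S tag set and the `boundary_labels` helper below are an assumed simplification, not necessarily the paper's exact scheme.

```python
def boundary_labels(slot_labels):
    """Derive auxiliary boundary tags from BIO slot labels: mark where a slot
    mention starts (B), continues (I), ends (E), or spans one character (S)."""
    boundaries = []
    for i, tag in enumerate(slot_labels):
        if tag == "O":
            boundaries.append("O")
            continue
        nxt = slot_labels[i + 1] if i + 1 < len(slot_labels) else "O"
        is_start = tag.startswith("B-")
        is_end = nxt == "O" or nxt.startswith("B-")
        if is_start and is_end:
            boundaries.append("S")
        elif is_start:
            boundaries.append("B")
        elif is_end:
            boundaries.append("E")
        else:
            boundaries.append("I")
    return boundaries

# "播 放 七 里 香" with a three-character song-name slot
print(boundary_labels(["O", "O", "B-song", "I-song", "I-song"]))
# -> ['O', 'O', 'B', 'I', 'E']
```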