2024
PTD-SQL: Partitioning and Targeted Drilling with LLMs in Text-to-SQL
Ruilin Luo
|
Liyuan Wang
|
Binghuai Lin
|
Zicheng Lin
|
Yujiu Yang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) have emerged as powerful tools for Text-to-SQL tasks, exhibiting remarkable reasoning capabilities. Unlike tasks such as math word problems and commonsense reasoning, SQL solutions have a relatively fixed pattern. This facilitates investigating whether LLMs can benefit from categorical thinking, mirroring how humans acquire knowledge through inductive reasoning based on comparable examples. In this study, we propose that employing query group partitioning allows LLMs to focus on learning the thought processes specific to a single problem type, consequently enhancing their reasoning abilities across diverse difficulty levels and problem categories. Our experiments reveal that multiple advanced LLMs, when equipped with PTD-SQL, can surpass or match previous state-of-the-art (SOTA) methods on the Spider and BIRD datasets. Intriguingly, models with varying initial performance exhibit significant improvements mainly at the boundary of their capabilities after targeted drilling, suggesting a parallel with human progress. Code is available at https://github.com/lrlbbzl/PTD-SQL.
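A hedged sketch of the query-group-partitioning idea described above: an incoming question is routed to a problem category, and the few-shot prompt is built only from exemplars of that category, so the LLM "drills" one problem type at a time. The categories, exemplars, and `classify()` rule here are invented placeholders, not the paper's actual partitioner.

```python
# Illustrative only: route a question to a query group, then build a
# category-specific few-shot prompt from that group's exemplars.
EXEMPLARS = {
    "aggregation": ["Q: average salary per department ... SQL: SELECT AVG(salary) ..."],
    "join": ["Q: orders with customer names ... SQL: SELECT ... JOIN ..."],
}

def classify(question):
    # Stand-in for the paper's partitioner (an LLM or rules in practice).
    return "join" if "with" in question else "aggregation"

def build_prompt(question):
    """Assemble a prompt using only exemplars from the question's group."""
    group = classify(question)
    shots = "\n".join(EXEMPLARS[group])
    return f"{shots}\nQ: {question}\nSQL:", group

prompt, group = build_prompt("list orders with customer names")
```

In practice the partitioner and the per-group exemplar banks would come from the paper's released code; this toy version only shows the control flow.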
DialogVCS: Robust Natural Language Understanding in Dialogue System Upgrade
Zefan Cai
|
Xin Zheng
|
Tianyu Liu
|
Haoran Meng
|
Jiaqi Han
|
Gang Yuan
|
Binghuai Lin
|
Baobao Chang
|
Yunbo Cao
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
With the constant updates of production dialogue systems, the natural language understanding (NLU) model needs to be retrained as new data from real users is merged into the data accumulated in previous updates. Within the newly added data, new intents emerge and may be semantically entangled with existing intents; e.g., new intents that are semantically too specific or too generic are in fact subsets or supersets of existing intents in the semantic space, impairing the robustness of the NLU model. As a first attempt to solve this problem, we set up a new benchmark consisting of 4 Dialogue Version Control dataSets (DialogVCS). We formulate intent detection with imperfect data in the system update as a multi-label classification task with positive but unlabeled intents, which asks the models to recognize all proper intents, including the semantically entangled ones, at inference time. We also propose comprehensive baseline models and conduct in-depth analyses for the benchmark, showing that the semantically entangled intents can be effectively recognized with an automatic workflow. Our code and dataset are available at https://github.com/Zefan-Cai/DialogVCS.
Large Language Models are not Fair Evaluators
Peiyi Wang
|
Lei Li
|
Liang Chen
|
Zefan Cai
|
Dawei Zhu
|
Binghuai Lin
|
Yunbo Cao
|
Lingpeng Kong
|
Qi Liu
|
Tianyu Liu
|
Zhifang Sui
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper, we uncover a positional bias in the evaluation paradigm of adopting large language models (LLMs), e.g., GPT-4, as referees to score and compare the quality of responses generated by candidate models. We find that the quality ranking of candidate responses can be easily hacked simply by altering their order of appearance in the context. This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other; e.g., Vicuna-13B could beat ChatGPT on 66 of 80 tested queries with ChatGPT as the evaluator. We propose a simple yet effective calibration framework to address this positional bias. To evaluate its effectiveness, we manually annotate the “win/tie/lose” outcomes of responses from ChatGPT and Vicuna-13B on the Vicuna Benchmark’s question prompts. Extensive experiments demonstrate that our approach successfully alleviates evaluation bias, resulting in closer alignment with human judgments.
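One common way to calibrate such positional bias, in the spirit of the framework above, is to query the judge under both presentation orders and average the per-response scores; the swap cancels any advantage tied to position. The `judge` interface and the toy biased judge below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of position-swap calibration for an LLM-as-a-judge setup.
def calibrated_compare(judge, resp_a, resp_b):
    """Score a response pair under both orders and average per response."""
    s1_a, s1_b = judge(resp_a, resp_b)   # A shown first
    s2_b, s2_a = judge(resp_b, resp_a)   # B shown first (scores return in shown order)
    return (s1_a + s2_a) / 2, (s1_b + s2_b) / 2

# Toy judge with a built-in +1 bonus for whichever response appears first.
def biased_judge(first, second):
    base = {"good answer": 8, "weak answer": 5}
    return base[first] + 1, base[second]

a, b = calibrated_compare(biased_judge, "good answer", "weak answer")
# The averaged scores (8.5 vs 5.5) restore the true ranking regardless of order.
```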
2023
Denoising Bottleneck with Mutual Information Maximization for Video Multimodal Fusion
Shaoxiang Wu
|
Damai Dai
|
Ziwei Qin
|
Tianyu Liu
|
Binghuai Lin
|
Yunbo Cao
|
Zhifang Sui
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Video multimodal fusion aims to integrate multimodal signals in videos, such as visual, audio, and text, to make a complementary prediction from the contents of multiple modalities. However, unlike other image-text multimodal tasks, video has longer multimodal sequences with more redundancy and noise in both the visual and audio modalities. Prior denoising methods such as forget gates are coarse in the granularity of noise filtering: they often suppress redundant and noisy information at the risk of losing critical information. Therefore, we propose a denoising bottleneck fusion (DBF) model for fine-grained video multimodal fusion. On the one hand, we employ a bottleneck mechanism to filter out noise and redundancy with a restrained receptive field. On the other hand, we use a mutual information maximization module to regulate the filter-out module so that key information within different modalities is preserved. Our DBF model achieves significant improvement over current state-of-the-art baselines on multiple benchmarks covering multimodal sentiment analysis and multimodal summarization tasks. This shows that our model can effectively capture salient features from noisy and redundant video, audio, and text inputs. The code for this paper is publicly available at https://github.com/WSXRHFG/DBF.
Soft Language Clustering for Multilingual Model Pre-training
Jiali Zeng
|
Yufan Jiang
|
Yongjing Yin
|
Yi Jing
|
Fandong Meng
|
Binghuai Lin
|
Yunbo Cao
|
Jie Zhou
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multilingual pre-trained language models have demonstrated impressive (zero-shot) cross-lingual transfer abilities; however, their performance is hindered when the target language has a distant typology from the source language or when pre-training data is limited in size. In this paper, we propose XLM-P, a method that contextually retrieves prompts as flexible guidance for encoding instances conditionally. Our space-efficient and model-agnostic XLM-P approach enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods. On the XTREME tasks, which include text classification, sequence labeling, question answering, and sentence retrieval, both base- and large-size language models pre-trained with our proposed method exhibit consistent performance improvements. Furthermore, it provides substantial advantages for low-resource languages in unsupervised sentence retrieval and for target languages that differ greatly from the source language in cross-lingual transfer.
Enhancing Continual Relation Extraction via Classifier Decomposition
Heming Xia
|
Peiyi Wang
|
Tianyu Liu
|
Binghuai Lin
|
Yunbo Cao
|
Zhifang Sui
Findings of the Association for Computational Linguistics: ACL 2023
Continual relation extraction (CRE) models aim to handle emerging new relations while avoiding catastrophic forgetting of old ones in streaming data. Though improvements have been shown by previous CRE studies, most of them adopt only a vanilla strategy when models first learn representations of new relations. In this work, we point out that two typical biases arise after training with this vanilla strategy: classifier bias and representation bias, which cause the previously learned knowledge to be overshadowed. To alleviate these biases, we propose a simple yet effective classifier decomposition framework that splits the last FFN layer into separate previous and current classifiers, so as to maintain previous knowledge and encourage the model to learn more robust representations at this training stage. Experimental results on two standard benchmarks show that our proposed framework consistently outperforms state-of-the-art CRE models, indicating that the importance of the first training stage to CRE models may be underestimated. Our code will be released upon acceptance.
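A minimal sketch of what splitting the last classification layer can look like: the final layer is kept as two separate weight blocks, one (conceptually frozen) for previously learned relations and one trainable for the current task's new relations, with their logits concatenated at prediction time. The class and function names are illustrative assumptions, not the authors' implementation.

```python
# Toy dot-product classifier split into "previous" and "current" blocks.
def linear(weights, features):
    """Plain dot-product scores: one row of weights per relation class."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

class DecomposedClassifier:
    def __init__(self, old_weights, new_weights):
        self.old = old_weights   # kept fixed: previously learned relations
        self.new = new_weights   # trainable: current task's relations

    def scores(self, features):
        # Concatenate logits from both blocks for joint prediction.
        return linear(self.old, features) + linear(self.new, features)

clf = DecomposedClassifier(old_weights=[[1.0, 0.0]], new_weights=[[0.0, 1.0]])
logits = clf.scores([2.0, 3.0])
```

In a real CRE model both blocks would be rows of one FFN weight matrix, with only the `new` rows receiving gradient updates during the first training stage of a task.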
Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
Shoujie Tong
|
Heming Xia
|
Damai Dai
|
Runxin Xu
|
Tianyu Liu
|
Binghuai Lin
|
Yunbo Cao
|
Zhifang Sui
Findings of the Association for Computational Linguistics: EMNLP 2023
Pretrained language models have achieved remarkable success in natural language understanding. However, fine-tuning pretrained models on limited training data tends to overfit and thus diminish performance. This paper presents Bi-Drop, a fine-tuning strategy that selectively updates model parameters using gradients from various sub-nets dynamically generated by dropout. The sub-net estimation of Bi-Drop is performed in an in-batch manner, so it overcomes the hysteresis in sub-net updating that afflicts previous methods performing asynchronous sub-net estimation. Moreover, Bi-Drop needs only one mini-batch to estimate the sub-net, so it achieves higher utilization of the training data. Experiments on the GLUE benchmark demonstrate that Bi-Drop consistently outperforms previous fine-tuning methods. Furthermore, empirical results also show that Bi-Drop exhibits excellent generalization ability and robustness in domain transfer, data imbalance, and low-resource scenarios.
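The in-batch sub-net idea can be sketched as follows: several dropout masks are sampled for the same mini-batch, each defining a sub-net, and only parameters active in the sampled sub-nets are candidates for update. This toy version just samples the masks and collects the active parameter indices; it illustrates the selection mechanism, not the paper's exact update rule.

```python
import random

def sample_subnets(n_params, n_masks, keep_prob, rng):
    """Sample several dropout masks (sub-nets) for one mini-batch."""
    return [[1 if rng.random() < keep_prob else 0 for _ in range(n_params)]
            for _ in range(n_masks)]

def updated_indices(masks):
    """Parameter indices active in at least one sampled sub-net."""
    return [i for i in range(len(masks[0])) if any(m[i] for m in masks)]

rng = random.Random(0)
masks = sample_subnets(n_params=6, n_masks=2, keep_prob=0.5, rng=rng)
idx = updated_indices(masks)
```

Because all masks are drawn for the same mini-batch, the sub-net estimate never lags behind the parameters it selects, which is the hysteresis problem the abstract refers to.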
DialogQAE: N-to-N Question Answer Pair Extraction from Customer Service Chatlog
Xin Zheng
|
Tianyu Liu
|
Haoran Meng
|
Xu Wang
|
Yufan Jiang
|
Mengliang Rao
|
Binghuai Lin
|
Yunbo Cao
|
Zhifang Sui
Findings of the Association for Computational Linguistics: EMNLP 2023
Harvesting question-answer (QA) pairs from customer service chatlogs in the wild is an efficient way to enrich the knowledge base of customer service chatbots in cold-start or continuous-integration scenarios. Prior work attempts to obtain 1-to-1 QA pairs from a growing customer service chatlog, which fails to integrate the incomplete utterances from the dialog context for composite QA retrieval. In this paper, we propose the N-to-N QA extraction task, in which the derived questions and corresponding answers may be separated across different utterances. We introduce a suite of generative/discriminative tagging-based methods, with end-to-end and two-stage variants, that perform well on 5 customer service datasets, and for the first time set up a benchmark for N-to-N DialogQAE with utterance- and session-level evaluation metrics. With a deep dive into the extracted QA pairs, we find that the relations between and inside the QA pairs can serve as indicators for analyzing the dialogue structure, e.g., information seeking, clarification, barge-in, and elaboration. We also show that the proposed models can adapt to different domains and languages, and reduce the labor cost of knowledge accumulation in a real-world product dialogue platform.
2022
HPT: Hierarchy-aware Prompt Tuning for Hierarchical Text Classification
Zihan Wang
|
Peiyi Wang
|
Tianyu Liu
|
Binghuai Lin
|
Yunbo Cao
|
Zhifang Sui
|
Houfeng Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Hierarchical text classification (HTC) is a challenging subtask of multi-label classification due to its complex label hierarchy. Recently, pretrained language models (PLMs) have been widely adopted in HTC through a fine-tuning paradigm. However, in this paradigm there exists a huge gap between classification tasks with a sophisticated label hierarchy and the masked language model (MLM) pretraining tasks of PLMs, and thus the potential of PLMs cannot be fully tapped. To bridge the gap, in this paper we propose HPT, a Hierarchy-aware Prompt Tuning method that handles HTC from a multi-label MLM perspective. Specifically, we construct a dynamic virtual template and label words that take the form of soft prompts to fuse the label hierarchy knowledge, and we introduce a zero-bounded multi-label cross-entropy loss to harmonize the objectives of HTC and MLM. Extensive experiments show that HPT achieves state-of-the-art performance on 3 popular HTC datasets and is adept at handling imbalanced and low-resource situations. Our code is available at https://github.com/wzh9969/HPT.
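A zero-bounded multi-label cross-entropy loss of the kind mentioned above pushes positive-label scores above a zero threshold and negative-label scores below it. The sketch below follows the commonly used form loss = log(1 + Σ_neg e^s) + log(1 + Σ_pos e^{-s}); treat it as an illustrative reconstruction under that assumption, not the authors' exact code.

```python
import math

def zero_bounded_multilabel_ce(scores, positive):
    """Loss is near zero when positives >> 0 >> negatives."""
    pos = [s for s, p in zip(scores, positive) if p]
    neg = [s for s, p in zip(scores, positive) if not p]
    loss_pos = math.log(1.0 + sum(math.exp(-s) for s in pos))
    loss_neg = math.log(1.0 + sum(math.exp(s) for s in neg))
    return loss_pos + loss_neg

# Well-separated scores give a tiny loss; inverted scores give a large one.
good = zero_bounded_multilabel_ce([10.0, -10.0], [True, False])
bad = zero_bounded_multilabel_ce([-10.0, 10.0], [True, False])
```

The zero bound matters for HTC because the number of gold labels per document varies: at inference, every label scoring above zero is predicted, with no need to tune a per-dataset threshold.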
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation
Peiyi Wang
|
Yifan Song
|
Tianyu Liu
|
Binghuai Lin
|
Yunbo Cao
|
Sujian Li
|
Zhifang Sui
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. CRE models usually suffer from the catastrophic forgetting problem, i.e., performance on old relations seriously degrades when the model learns new relations. Most previous work attributes catastrophic forgetting to the corruption of learned representations as new relations arrive, with the implicit assumption that the CRE models have adequately learned the old relations. In this paper, through empirical studies, we argue that this assumption may not hold, and that an important cause of catastrophic forgetting is that the learned representations are not robust against the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA consistently improves the performance of state-of-the-art CRE models on two popular benchmarks.
DualNER: A Dual-Teaching framework for Zero-shot Cross-lingual Named Entity Recognition
Jiali Zeng
|
Yufan Jiang
|
Yongjing Yin
|
Xu Wang
|
Binghuai Lin
|
Yunbo Cao
Findings of the Association for Computational Linguistics: EMNLP 2022
We present DualNER, a simple and effective framework that makes full use of both an annotated source-language corpus and unlabeled target-language text for zero-shot cross-lingual named entity recognition (NER). In particular, we combine two complementary learning paradigms of NER, i.e., sequence labeling and span prediction, into a unified multi-task framework. After obtaining a sufficiently trained NER model on the source data, we further train it on the target data in a dual-teaching manner, in which the pseudo-labels for one task are constructed from the predictions of the other task. Moreover, based on the span prediction, an entity-aware regularization is proposed to enhance the intrinsic cross-lingual alignment between the same entities in different languages. Experiments and analysis demonstrate the effectiveness of DualNER.
DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection
Haoran Meng
|
Zheng Xin
|
Tianyu Liu
|
Zizhen Wang
|
He Feng
|
Binghuai Lin
|
Xuemin Zhao
|
Yunbo Cao
|
Zhifang Sui
Findings of the Association for Computational Linguistics: EMNLP 2022
While interacting with chatbots, users may express multiple intents in a single dialogue utterance. Instead of training a dedicated multi-intent detection model, we propose DialogUSR, a dialogue utterance splitting and reformulation task that first splits a multi-intent user query into several single-intent sub-queries and then recovers all the coreferred and omitted information in the sub-queries. DialogUSR can serve as a plug-in, domain-agnostic module that empowers multi-intent detection for deployed chatbots with minimal effort. We collect a high-quality, naturally occurring dataset that covers 23 domains with a multi-step crowd-sourcing procedure. To benchmark the proposed dataset, we propose multiple action-based generative models involving end-to-end and two-stage training, and conduct in-depth analyses of the pros and cons of the proposed baselines.
Prosodic Effects of Speech Unit’s Information Based on GPT-2 and Mutual Information (基于GPT-2和互信息的语言单位信息量对韵律特征的影响)
Yun Hao (郝韵)
|
Yanlu Xie (解焱陆)
|
Binghuai Lin (林炳怀)
|
Jinsong Zhang (张劲松)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
Information-theoretic studies of speech production have found that language units carrying more information tend to be phonetically strengthened in the speech signal. Prior work has mostly measured a unit’s information content via self-information, which struggles to model long-distance context. This study introduces measures based on the pretrained language model GPT-2 and on text-pinyin mutual information to quantify the information content of Mandarin words, finals, and tones, and examines their effects on the prosodic features of speech production. The results show that when Mandarin words and finals carry more information, their prosodic features tend to be enhanced, demonstrating the effectiveness of the proposed measures. The information effect is more pronounced for duration than for pitch and intensity.
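The information content of a unit in context is standardly measured as surprisal, -log2 p(unit | context), which is the quantity a language model such as GPT-2 supplies. The probabilities below are made-up illustrations; a real setup would read them off the model's softmax over the context.

```python
import math

def surprisal(prob):
    """Information content in bits of an outcome with probability prob."""
    return -math.log2(prob)

# A predictable unit carries little information; a surprising one, more.
predictable = surprisal(0.5)     # 1.0 bit
surprising = surprisal(0.0625)   # 4.0 bits
```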
An Entropy-based Evaluation of L2 Speech Acquisition: The Preliminary Report on Chinese Initials Produced by Japanese Learners (基于熵的二语语音习得评价研究—以日本学习者习得汉语声母为例)
Xiaoli Feng (冯晓莉)
|
Yingming Gao (高迎明)
|
Binghuai Lin (林炳怀)
|
Jinsong Zhang (张劲松)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
This paper introduces entropy to quantify the distribution of learners’ L2 phoneme pronunciation errors. Analyzing the error rates and error dispersion of phonemes across learners of different L2 proficiency levels, we find that: (1) error rate correlates highly with error dispersion, and the differences between the two reflect differences in how errors are distributed; (2) among phonemes with similar error rates, those more similar to an L1 phoneme show lower error dispersion; (3) compared with beginners, intermediate learners show lower phoneme error rates but higher error dispersion. Entropy can thus, beyond the error rate, further reveal how the learners’ L1 phonology and L2 proficiency shape the dispersion of phoneme pronunciation errors.
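The dispersion measure above is the Shannon entropy of the error-type distribution, H = -Σ p_i log2 p_i: zero when all errors collapse onto one type, maximal when they spread evenly. The error counts below are invented for illustration.

```python
import math

def error_entropy(counts):
    """Shannon entropy (bits) of a phoneme's error-type distribution."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# All errors of one type: fully concentrated, entropy 0 bits.
concentrated = error_entropy([10, 0, 0])
# Errors spread evenly over four types: maximal dispersion, entropy 2 bits.
dispersed = error_entropy([5, 5, 5, 5])
```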