2024
FRoG: Evaluating Fuzzy Reasoning of Generalized Quantifiers in LLMs
Yiyuan Li | Shichao Sun | Pengfei Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Fuzzy reasoning is vital due to the frequent use of imprecise information in daily contexts. However, the ability of current large language models (LLMs) to handle such reasoning remains largely uncharted. In this paper, we introduce a new benchmark, FRoG, for fuzzy reasoning, featuring real-world mathematical word problems that incorporate generalized quantifiers. Our experimental findings reveal that fuzzy reasoning continues to pose significant challenges for LLMs. Moreover, we find that existing methods designed to enhance reasoning do not consistently improve performance in tasks involving fuzzy logic. Additionally, our results show an inverse scaling effect in the performance of LLMs on FRoG. Interestingly, we also demonstrate that strong mathematical reasoning skills are not necessarily indicative of success on our benchmark.
OpenResearcher: Unleashing AI for Accelerated Scientific Research
Yuxiang Zheng | Shichao Sun | Lin Qiu | Dongyu Ru | Cheng Jiayang | Xuefeng Li | Jifan Lin | Binjie Wang | Yun Luo | Renjie Pan | Yang Xu | Qingkai Min | Zizhao Zhang | Yiwen Wang | Wenjie Li | Pengfei Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
The rapid growth of scientific literature poses significant challenges for researchers striving to stay abreast of the latest advancements in their fields and to delve into new areas. We introduce OpenResearcher, an innovative platform that leverages Artificial Intelligence (AI) techniques to accelerate the research process by answering diverse questions from researchers. OpenResearcher builds on Retrieval-Augmented Generation (RAG) to integrate Large Language Models (LLMs) with up-to-date, domain-specific knowledge. Moreover, we develop various tools for OpenResearcher to understand researchers’ queries, search the scientific literature, filter retrieved information, provide accurate and comprehensive answers, and self-refine these answers. OpenResearcher can flexibly use these tools to balance efficiency and effectiveness. As a result, OpenResearcher enables researchers to save time and increases their potential to discover new insights and drive scientific breakthroughs. Demo, video, and code are available at: https://github.com/GAIR-NLP/OpenResearcher.
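As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch of a RAG-style question-answering loop. All names here (the llm callable, retriever.search, and the prompt wording) are hypothetical placeholders for illustration, not the actual OpenResearcher API; see the repository linked above for the real implementation.

```python
# Minimal sketch of a RAG-style research assistant loop, as described in the
# abstract. The `llm` callable and `retriever` interface are assumptions.

def answer_research_question(query: str, llm, retriever) -> str:
    """Answer a researcher's question with retrieval-augmented generation."""
    # 1. Understand the query: let the LLM rewrite it into search terms.
    search_terms = llm(f"Extract key search terms from: {query}")

    # 2. Search the scientific literature for candidate passages.
    passages = retriever.search(search_terms, top_k=20)

    # 3. Filter retrieved information: keep passages the LLM judges relevant.
    relevant = [p for p in passages
                if "yes" in llm(f"Is this relevant to '{query}'? {p}").lower()]

    # 4. Generate an answer grounded in the filtered passages.
    context = "\n".join(relevant)
    answer = llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")

    # 5. Self-refine: ask the LLM to check and improve its own answer.
    return llm(f"Improve this answer for accuracy and completeness:\n{answer}")
```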
Contrastive Preference Learning for Neural Machine Translation
Jianfei He | Shichao Sun | Sen Peng | Jie Xu | Xiaohua Jia | Wenjie Li
Findings of the Association for Computational Linguistics: NAACL 2024
There is a discrepancy between the token-level objective used during training and the sequence-level quality expected from the model, which leads to issues such as exposure bias. To align the model with human expectations, sequence-level objectives are often used to fine-tune pre-trained models. In this paper, we introduce a contrastive preference model that enhances the traditional Plackett-Luce model by incorporating an indicator function. Building upon this novel preference model, we propose Contrastive Preference Learning (CPL), which uses offline samples with list-wise preferences to fine-tune a pre-trained model in Neural Machine Translation. Our experiments, conducted on three language pairs, demonstrate that CPL outperforms not only the vanilla Transformer model but also other token-level and sequence-level baselines. Furthermore, an ablation study highlights the essential role of the proposed indicator function in achieving this improvement.
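For context, the standard Plackett-Luce model assigns a ranking of n candidate translations the likelihood below, where s_i is the model's score for candidate i. The paper's contrastive preference model augments this formulation with an indicator function whose exact form is given in the paper, so this equation shows only the baseline being enhanced.

```latex
% Standard Plackett-Luce likelihood of a ranking \pi over n candidates
% with scores s_1, \dots, s_n (the baseline that CPL's indicator extends).
P(\pi \mid s) \;=\; \prod_{i=1}^{n}
  \frac{\exp\big(s_{\pi(i)}\big)}{\sum_{j=i}^{n} \exp\big(s_{\pi(j)}\big)}
```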
Prompt Chaining or Stepwise Prompt? Refinement in Text Summarization
Shichao Sun | Ruifeng Yuan | Ziqiang Cao | Wenjie Li | Pengfei Liu
Findings of the Association for Computational Linguistics: ACL 2024
The Critique of Critique
Shichao Sun | Junlong Li | Weizhe Yuan | Ruifeng Yuan | Wenjie Li | Pengfei Liu
Findings of the Association for Computational Linguistics: ACL 2024
Recovery Should Never Deviate from Ground Truth: Mitigating Exposure Bias in Neural Machine Translation
Jianfei He | Shichao Sun | Xiaohua Jia | Wenjie Li
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
In Neural Machine Translation, models are often trained with teacher forcing and suffer from exposure bias due to the discrepancy between training and inference. Current token-level solutions, such as scheduled sampling, aim to maximize the model’s capability to recover from errors. However, their loss functions have a side effect: a sequence containing errors may receive a higher probability than the ground truth, so generated sequences can recover too much and deviate from the ground truth. Our experiments verify this side effect. To address the issue, we propose using token-level contrastive learning to coordinate three training objectives: the usual MLE objective, an objective for recovery from errors, and a new objective that explicitly constrains recovery to a scope that does not impact the ground truth. Our empirical analysis shows that this method achieves all three objectives during training and reduces how often the third objective is violated. We conduct experiments on three language pairs: German-English, Russian-English, and English-Russian. Results show that our method outperforms the vanilla Transformer and other methods that address exposure bias.
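One plausible reading of the three coordinated objectives is sketched below in Python/PyTorch. The margin form of the constraint, the loss weights, and all names are assumptions for illustration, not the paper's exact contrastive formulation.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits_tf, logits_ss, targets, margin=1.0,
                  w_mle=1.0, w_rec=1.0, w_con=1.0):
    """Hypothetical combination of the three objectives described above.

    logits_tf: (T, V) logits under teacher forcing (ground-truth prefixes).
    logits_ss: (T, V) logits under scheduled-sampling prefixes (with errors).
    targets:   (T,) ground-truth token ids.
    """
    # 1. Usual MLE objective on ground-truth prefixes.
    mle = F.cross_entropy(logits_tf, targets)

    # 2. Recovery objective: still predict gold tokens after erroneous prefixes.
    rec = F.cross_entropy(logits_ss, targets)

    # 3. Contrastive constraint: keep the gold-prefix probability of each
    #    target token at least `margin` above its erroneous-prefix probability,
    #    so recovery never overtakes the ground truth.
    logp_tf = F.log_softmax(logits_tf, dim=-1).gather(1, targets[:, None]).squeeze(1)
    logp_ss = F.log_softmax(logits_ss, dim=-1).gather(1, targets[:, None]).squeeze(1)
    con = F.relu(margin - (logp_tf - logp_ss)).mean()

    return w_mle * mle + w_rec * rec + w_con * con
```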
Dissecting Human and LLM Preferences
Junlong Li | Fan Zhou | Shichao Sun | Yikai Zhang | Hai Zhao | Pengfei Liu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
As relative quality comparisons of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation. Yet, these preferences merely reflect broad tendencies, resulting in less explainable and controllable models with potential safety risks. In this work, we dissect the preferences of humans and 32 different LLMs to understand their quantitative composition, using annotations from real-world user-model conversations for a fine-grained, scenario-wise analysis. We find that humans are less sensitive to errors, favor responses that support their stances, and show clear dislike when models admit their limits. In contrast, advanced LLMs such as GPT-4-Turbo place greater emphasis on correctness, clarity, and harmlessness. Additionally, LLMs of similar sizes tend to exhibit similar preferences regardless of their training methods, and fine-tuning for alignment does not significantly alter the preferences of pretrained-only LLMs. Finally, we show that preference-based evaluation can be intentionally manipulated: in both training-free and training-based settings, aligning a model with the preferences of judges boosts scores, while injecting the least preferred properties lowers them. This yields notable score shifts of up to 0.59 on MT-Bench (1-10 scale) and 31.94 on AlpacaEval 2.0 (0-100 scale), highlighting the significant impact of this strategic adaptation. We have made all resources of this project publicly available.
2023
Separating Context and Pattern: Learning Disentangled Sentence Representations for Low-Resource Extractive Summarization
Ruifeng Yuan | Shichao Sun | Zili Wang | Ziqiang Cao | Wenjie Li
Findings of the Association for Computational Linguistics: ACL 2023
Extractive summarization aims to select a set of salient sentences from the source document to form a summary. Context information has been considered one of the key factors for this task. Meanwhile, other pattern factors can also indicate sentence importance, such as sentence position or certain n-gram tokens. However, such pattern information is effective only in specific datasets or domains and, unlike context information, cannot be generalized when only limited data exists. Consequently, current extractive summarization models may suffer a performance drop when transferred to a new dataset. In this paper, we apply disentangled representation learning to extractive summarization, separating the task’s two key factors, context and pattern, for better generalization in the low-resource setting. To achieve this, we propose two groups of losses for encoding and disentangling sentence representations into context representations and pattern representations. This allows us to either use only the context information in the zero-shot setting or fine-tune the pattern information in the few-shot setting. Experimental results on three summarization datasets from different domains show the effectiveness of our proposed approach.
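To make the disentanglement idea concrete, here is a generic Python sketch that splits a sentence embedding into context and pattern parts with an orthogonality-style penalty. This is a common disentanglement device used purely for illustration; the paper's actual two groups of losses differ.

```python
import torch
import torch.nn.functional as F

def split_and_disentangle(sent_emb, ctx_proj, pat_proj):
    """Hypothetical sketch: project a sentence embedding (B, D) into a
    context part and a pattern part, penalizing overlap between the two
    (a generic device; the paper proposes its own loss groups).
    """
    context = ctx_proj(sent_emb)   # generalizable context representation
    pattern = pat_proj(sent_emb)   # dataset-specific pattern representation

    # Push the two representations toward orthogonality (cosine -> 0),
    # encouraging them to encode different information.
    overlap = F.cosine_similarity(context, pattern, dim=-1)
    disent_loss = overlap.pow(2).mean()
    return context, pattern, disent_loss
```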
Data Selection Curriculum for Abstractive Text Summarization
Shichao Sun | Ruifeng Yuan | Jianfei He | Ziqiang Cao | Wenjie Li | Xiaohua Jia
Findings of the Association for Computational Linguistics: EMNLP 2023
Abstractive Text Summarization (ATS) models are commonly trained on large-scale data that is randomly shuffled. However, the impact of data selection and data ordering on ATS models remains a relatively unexplored research area, where a significant challenge lies in accurately assessing the learning difficulty of each training instance. This study introduces a Data Selection Curriculum (DSC) scoring system that incorporates both the difficulty of improving the ATS model via an instance and the expected performance on that instance. By selectively excluding excessively simple and overly complex instances, training efficiency can be optimized. Furthermore, inspired by human learners, curriculum learning is integrated to accelerate convergence and improve performance by gradually increasing the learning difficulty. Experimental results on the CNN/DailyMail dataset demonstrate that our approach surpasses strong baselines while using only 20% of the available instances.
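A minimal sketch of how such a selection-plus-curriculum scheme might look in Python is shown below. The percentile thresholds and the score_fn interface are hypothetical, since the paper defines its own DSC score combining improvement difficulty and expected per-instance performance.

```python
import numpy as np

def build_curriculum(instances, score_fn, low_pct=10, high_pct=90):
    """Hypothetical difficulty-based selection plus curriculum ordering.
    `score_fn` stands in for the paper's DSC score (higher = harder)."""
    scores = np.array([score_fn(x) for x in instances])
    lo, hi = np.percentile(scores, [low_pct, high_pct])

    # Exclude excessively simple and overly complex instances.
    kept = [(s, x) for s, x in zip(scores, instances) if lo <= s <= hi]

    # Curriculum learning: present easier instances first, harder ones later.
    kept.sort(key=lambda pair: pair[0])
    return [x for _, x in kept]
```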
Empirical Analysis of Beam Search Curse and Search Errors with Model Errors in Neural Machine Translation
Jianfei He | Shichao Sun | Xiaohua Jia | Wenjie Li
Proceedings of the 24th Annual Conference of the European Association for Machine Translation
Beam search is the most popular decoding method for Neural Machine Translation (NMT) and remains a strong baseline compared with newly proposed sampling-based methods. To better understand beam search, we investigate two of its well-recognized issues, the beam search curse and search errors, at the sentence level. We find that fewer than 30% of the sentences in the test set exhibit these issues. We also observe a related phenomenon: for the majority of sentences, the gold references have lower probabilities than the beam search predictions. We further test with different levels of model errors, including a special test using training samples and models without regularization, and find that these phenomena persist, albeit mitigated, even for a model with an accuracy of 95%. These findings suggest that seeking higher probabilities during search and further reducing search errors are not promising ways to improve beam search. The relationship between the quality and the probability of predictions at the sentence level in our results provides useful information for finding new ways to improve NMT.
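The sentence-level comparison reported above can be made concrete with a short sketch: score both the gold reference and the beam output under the same model via forced decoding and compare. The model(src, tgt_in) interface assumed here is hypothetical, used only for illustration rather than any specific library's API.

```python
import torch
import torch.nn.functional as F

def sequence_log_prob(model, src, tgt_ids):
    """Sum of token log-probabilities of tgt_ids under forced decoding.

    Assumes model(src, tgt_in) returns (T-1, V) next-token logits, where
    tgt_ids includes BOS and EOS -- a hypothetical interface.
    """
    with torch.no_grad():
        logits = model(src, tgt_ids[:-1])        # predict tokens 1..T-1
        logp = F.log_softmax(logits, dim=-1)
        return logp.gather(1, tgt_ids[1:, None]).sum().item()

def reference_below_beam(model, src, ref_ids, beam_ids):
    """True when the gold reference scores a lower probability than the
    beam search hypothesis -- the phenomenon observed for most sentences."""
    return (sequence_log_prob(model, src, ref_ids)
            < sequence_log_prob(model, src, beam_ids))
```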