Xuan Ren
2024
I Learn Better If You Speak My Language: Understanding the Superior Performance of Fine-Tuning Large Language Models with LLM-Generated Responses
Xuan Ren | Biao Wu | Lingqiao Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This paper explores an intriguing observation: fine-tuning a large language model (LLM) with responses generated by an LLM often yields better results than using responses generated by humans, particularly in reasoning tasks. We conduct an in-depth investigation to understand why this occurs. Contrary to the common belief that this effect is due to the more detailed nature of LLM-generated content, our study identifies another contributing factor: an LLM is inherently more “familiar” with LLM-generated responses. This familiarity is evidenced by lower perplexity before fine-tuning. We design a series of experiments to understand the impact of this “familiarity”, and our results show that it significantly affects learning performance. Training with LLM-generated responses not only enhances performance but also helps maintain the model’s capabilities in other reasoning tasks after fine-tuning on a specific task.
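The “familiarity” signal described in the abstract is simply the base model’s perplexity on a candidate response before any fine-tuning. As a rough illustration (not the authors’ code), the sketch below compares perplexities using the Hugging Face transformers API; the model name (gpt2) and the example strings are placeholders, and for simplicity it scores each response on its own rather than conditioned on its prompt.

```python
# Minimal sketch of the "familiarity" probe the abstract describes:
# compare a base model's perplexity on an LLM-generated response vs. a
# human-written one, *before* fine-tuning. The model name and example
# texts are placeholders; the paper's actual setup may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the un-fine-tuned model (lower = more 'familiar')."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean next-token
        # cross-entropy loss; exp(loss) is the perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

llm_response = "First, compute 12 * 3 = 36. Then add 4 to get 40."  # hypothetical LLM answer
human_response = "12 times 3 is 36, plus 4 makes 40."               # hypothetical human answer
print(f"LLM-generated: {perplexity(llm_response):.2f}")
print(f"Human-written: {perplexity(human_response):.2f}")
```

On the abstract’s account, the first score would tend to be lower than the second, and that gap, rather than response detail alone, helps explain the fine-tuning advantage.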
2023
Out-of-Distribution Generalization in Natural Language Processing: Past, Present, and Future
Linyi Yang | Yaoxian Song | Xuan Ren | Chenyang Lyu | Yidong Wang | Jingming Zhuo | Lingqiao Liu | Jindong Wang | Jennifer Foster | Yue Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This raises important questions about the robustness of NLP models, whose high accuracy may be artificially inflated by sensitivity to systematic biases. Despite these challenges, there is a lack of comprehensive surveys on the generalization challenge from an OOD perspective in natural language understanding. This paper aims to fill that gap by presenting the first comprehensive review of recent progress, methods, and evaluations on this topic. We further discuss the challenges involved and potential future research directions. By providing convenient access to existing work, we hope this survey will encourage future research in this area.