Zhuwei Rao
2024
Reassess Summary Factual Inconsistency Detection with Large Language Model
Jiuding Yang | Hui Liu | Weidong Guo | Zhuwei Rao | Yu Xu | Di Niu
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
Ensuring factual consistency between the summary and the original document is paramount in summarization tasks. Consequently, considerable effort has been dedicated to detecting inconsistencies. With the advent of Large Language Models (LLMs), recent studies have begun to leverage their advanced language understanding capabilities for inconsistency detection. However, early attempts have shown that LLMs underperform traditional models due to their limited ability to follow instructions and the absence of an effective detection methodology. In this study, we reassess summary inconsistency detection with LLMs, comparing the performance of GPT-3.5 and GPT-4. To advance research in LLM-based inconsistency detection, we propose SIFiD (Summary Inconsistency Detection with Filtered Document), which identifies key sentences within documents by either employing natural language inference or measuring semantic similarity between summaries and documents.
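The filtering step lends itself to a short illustration. Below is a minimal sketch of the semantic-similarity variant, assuming a sentence-transformers encoder; the model name, the 0.5 threshold, and the prompt wording are illustrative choices, not the paper's exact configuration.

```python
# A minimal sketch of SIFiD-style semantic-similarity filtering.
# Assumptions: a sentence-transformers model and a simple Yes/No prompt;
# the paper's actual encoder, threshold, and prompt may differ.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def filter_document(document_sents, summary_sents, threshold=0.5):
    """Keep document sentences semantically relevant to any summary sentence."""
    doc_emb = model.encode(document_sents, convert_to_tensor=True)
    sum_emb = model.encode(summary_sents, convert_to_tensor=True)
    # Cosine similarity between every (document sentence, summary sentence) pair.
    sims = util.cos_sim(doc_emb, sum_emb)
    # A document sentence survives if it is close to at least one summary sentence.
    keep = sims.max(dim=1).values >= threshold
    return [s for s, k in zip(document_sents, keep) if k]

def build_detection_prompt(document_sents, summary_sents):
    """Assemble the filtered document and the summary into an LLM query."""
    filtered = filter_document(document_sents, summary_sents)
    return (
        "Document:\n" + "\n".join(filtered)
        + "\n\nSummary:\n" + "\n".join(summary_sents)
        + "\n\nIs the summary factually consistent with the document? Answer Yes or No."
    )
```

The NLI variant described in the abstract would follow the same shape, scoring each (document sentence, summary sentence) pair with an entailment model instead of cosine similarity.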
Instruction Fusion: Advancing Prompt Evolution through Hybridization
Weidong Guo | Jiuding Yang | Kaitong Yang | Xiangyang Li | Zhuwei Rao | Yu Xu | Di Niu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The fine-tuning of Large Language Models (LLMs) specialized in code generation has seen notable advancements through the use of open-domain coding queries. Despite these successes, existing methodologies like Evol-Instruct encounter performance limitations, impeding further enhancements in code generation tasks. This paper examines the constraints of existing prompt evolution techniques and introduces a novel approach, Instruction Fusion (IF). IF combines two distinct prompts through a hybridization process, thereby enhancing the evolution of training prompts for code LLMs. Our experimental results reveal that the proposed method effectively addresses the shortcomings of prior methods, significantly improving the performance of code LLMs across five code generation benchmarks, namely HumanEval, HumanEval+, MBPP, MBPP+, and MultiPL-E, underscoring the effectiveness of Instruction Fusion in advancing the capabilities of LLMs in code generation.
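The hybridization step can be sketched in a few lines. In the sketch below, `call_llm` is a hypothetical completion function and the fusion template wording is an assumption; the paper's actual prompt and any post-fusion filtering may differ.

```python
# A minimal sketch of the Instruction Fusion idea: two existing coding
# prompts are handed to an LLM with a template that asks for one hybrid
# instruction combining both. The template text is an assumption.
FUSION_TEMPLATE = """You are given two programming instructions.
Combine them into one new, self-contained instruction that requires
skills from both, without making it unreasonably difficult.

Instruction 1:
{a}

Instruction 2:
{b}

Fused instruction:"""

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion API call."""
    raise NotImplementedError

def fuse_instructions(prompt_a: str, prompt_b: str) -> str:
    """Hybridize two training prompts into one evolved prompt."""
    return call_llm(FUSION_TEMPLATE.format(a=prompt_a, b=prompt_b))
```

The fused instructions would then serve as training prompts for fine-tuning a code LLM, in place of prompts evolved from a single parent as in Evol-Instruct.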
Co-authors
- Jiuding Yang 2
- Weidong Guo 2
- Yu Xu 2
- Di Niu 2
- Hui Liu 1