Kartik Perisetla
2021
Entity-Based Knowledge Conflicts in Question Answering
Shayne Longpre | Kartik Perisetla | Anthony Chen | Nikhil Ramesh | Chris DuBois | Sameer Singh
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Knowledge-dependent tasks typically use two sources of knowledge: parametric, learned at training time, and contextual, given as a passage at inference time. To understand how models use these sources together, we formalize the problem of knowledge conflicts, where the contextual information contradicts the learned information. Analyzing the behaviour of popular models, we measure their over-reliance on memorized information (the cause of hallucinations), and uncover important factors that exacerbate this behaviour. Lastly, we propose a simple method to mitigate over-reliance on parametric knowledge, which minimizes hallucination and improves out-of-distribution generalization by 4-7%. Our findings demonstrate the importance of practitioners evaluating a model's tendency to hallucinate rather than read, and show that our mitigation strategy encourages generalization to evolving information (i.e., time-dependent queries). To encourage these practices, we have released our framework for generating knowledge conflicts.
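For intuition, here is a minimal sketch of how an entity-based knowledge conflict can be constructed: swap the gold answer entity in the passage for a different entity of the same type, so the context contradicts what the model may have memorized. The function name and field names below are hypothetical, and the plain string substitution is a simplification; the released framework is more involved (e.g., choosing substitutes of matching entity type).

```python
# Illustrative sketch, not the released framework: build a knowledge-conflict
# QA example by replacing the gold answer entity in the passage with a
# substitute entity of the same type. A model that still returns the original
# entity is relying on parametric memory; returning the substitute shows it
# is reading the passage.

def make_knowledge_conflict(question: str, passage: str,
                            answer: str, substitute: str) -> dict:
    """Replace every mention of `answer` in the passage with `substitute`,
    so the contextual knowledge contradicts the parametric knowledge."""
    perturbed_passage = passage.replace(answer, substitute)
    return {
        "question": question,
        "passage": perturbed_passage,
        "contextual_answer": substitute,  # correct if the model reads
        "parametric_answer": answer,      # returned if it hallucinates
    }

example = make_knowledge_conflict(
    question="Who wrote 'The Old Man and the Sea'?",
    passage="'The Old Man and the Sea' is a novella by Ernest Hemingway.",
    answer="Ernest Hemingway",
    substitute="Mark Twain",  # same entity type: a person (an author)
)
print(example["passage"])
# 'The Old Man and the Sea' is a novella by Mark Twain.
```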