%0 Conference Proceedings
%T Probing Factually Grounded Content Transfer with Factual Ablation
%A West, Peter
%A Quirk, Chris
%A Galley, Michel
%A Choi, Yejin
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Findings of the Association for Computational Linguistics: ACL 2022
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F west-etal-2022-probing
%X Despite recent success, large neural models often generate factually incorrect text. Compounding this is the lack of a standard automatic evaluation for factuality: it cannot be meaningfully improved if it cannot be measured. Grounded generation promises a path to solving both of these problems: models draw on a reliable external document (grounding) for factual information, simplifying the challenge of factuality. Measuring factuality is also simplified, to factual consistency: testing whether the generation agrees with the grounding, rather than with all facts. Yet, without a standard automatic metric for factual consistency, factually grounded generation remains an open problem. We study this problem for content transfer, in which generations extend a prompt, using information from factual grounding. In particular, this domain allows us to introduce the notion of factual ablation for automatically measuring factual consistency: this captures the intuition that the model should be less likely to produce an output given a less relevant grounding document. In practice, we measure this by presenting a model with two grounding documents; the model should prefer to use the more factually relevant one. We contribute two evaluation sets to measure this. Applying our new evaluation, we propose multiple novel methods that improve over strong baselines.
%R 10.18653/v1/2022.findings-acl.294
%U https://aclanthology.org/2022.findings-acl.294
%U https://doi.org/10.18653/v1/2022.findings-acl.294
%P 3732-3746