%0 Conference Proceedings
%T Summary-Source Proposition-level Alignment: Task, Datasets and Supervised Baseline
%A Ernst, Ori
%A Shapira, Ori
%A Pasunuru, Ramakanth
%A Lepioshkin, Michael
%A Goldberger, Jacob
%A Bansal, Mohit
%A Dagan, Ido
%Y Bisazza, Arianna
%Y Abend, Omri
%S Proceedings of the 25th Conference on Computational Natural Language Learning
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online
%F ernst-etal-2021-summary
%X Aligning sentences in a reference summary with their counterparts in source documents was shown to be a useful auxiliary summarization task, notably for generating training data for salience detection. Despite its assessed utility, the alignment step was mostly approached with heuristic unsupervised methods, typically ROUGE-based, and was never independently optimized or evaluated. In this paper, we propose establishing summary-source alignment as an explicit task, while introducing two major novelties: (1) applying it at the more accurate proposition span level, and (2) approaching it as a supervised classification task. To that end, we created a novel training dataset for proposition-level alignment, derived automatically from available summarization evaluation data. In addition, we crowdsourced dev and test datasets, enabling model development and proper evaluation. Utilizing these data, we present a supervised proposition alignment baseline model, showing improved alignment quality over the unsupervised approach.
%R 10.18653/v1/2021.conll-1.25
%U https://aclanthology.org/2021.conll-1.25
%U https://doi.org/10.18653/v1/2021.conll-1.25
%P 310-322