%0 Conference Proceedings
%T Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem
%A Mrini, Khalil
%A Nie, Shaoliang
%A Gu, Jiatao
%A Wang, Sinong
%A Sanjabi, Maziar
%A Firooz, Hamed
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Findings of the Association for Computational Linguistics: ACL 2022
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F mrini-etal-2022-detection
%X We propose an autoregressive entity linking model that is trained with two auxiliary tasks and learns to re-rank generated samples at inference time. Our proposed novelties address two weaknesses in the literature. First, a recent method proposes to learn mention detection and then entity candidate selection, but relies on predefined sets of candidates. We use encoder-decoder autoregressive entity linking to bypass this need, and propose to train mention detection as an auxiliary task instead. Second, previous work suggests that re-ranking could help correct prediction errors. We add a new auxiliary task, match prediction, to learn re-ranking. Without the use of a knowledge base or candidate sets, our model sets a new state of the art on two benchmark datasets of entity linking: COMETA in the biomedical domain, and AIDA-CoNLL in the news domain. We show through ablation studies that each of the two auxiliary tasks increases performance, and that re-ranking is an important factor in the increase. Finally, our low-resource experimental results suggest that performance on the main task benefits from the knowledge learned by the auxiliary tasks, and not just from the additional training data.
%R 10.18653/v1/2022.findings-acl.156
%U https://aclanthology.org/2022.findings-acl.156
%U https://doi.org/10.18653/v1/2022.findings-acl.156
%P 1972-1983