CitRet: A Hybrid Model for Cited Text Span Retrieval

Amit Pandey, Avani Gupta, Vikram Pudi


Abstract
This paper aims to identify the cited text spans in a reference paper that correspond to a given citance in the citing paper, a task we refer to as cited text span retrieval (CTSR). Most current methods approach this task with pre-trained off-the-shelf deep learning models such as SciBERT. Although these models are pre-trained on large datasets, they underperform in out-of-domain settings. We introduce CitRet, a novel hybrid model for CTSR that leverages the unique semantic and syntactic structural characteristics of scientific documents, enabling fine-tuning on significantly less data: only 1040 documents. Our model augments mildly trained SBERT-based contextual embeddings with pre-trained non-contextual Word2Vec embeddings to compute semantic textual similarity. We demonstrate the performance of our model on the CLSciSumm shared tasks, improving the state-of-the-art results by over 15% on the F1 score evaluation.
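The abstract describes combining contextual (SBERT-based) and non-contextual (Word2Vec) embeddings to score semantic textual similarity. A minimal sketch of one such hybrid scheme is below, using a weighted sum of cosine similarities over toy vectors; the `alpha` weighting and the combination function are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def hybrid_similarity(ctx_a, ctx_b, w2v_a, w2v_b, alpha=0.5):
    # Hypothetical combination: weighted sum of the contextual
    # (SBERT-style) and non-contextual (Word2Vec-style) similarities.
    return alpha * cosine(ctx_a, ctx_b) + (1 - alpha) * cosine(w2v_a, w2v_b)

# Toy vectors standing in for citance / candidate-span embeddings.
ctx_a = np.array([1.0, 0.0, 1.0])   # contextual embedding of the citance
ctx_b = np.array([1.0, 0.1, 0.9])   # contextual embedding of a candidate span
w2v_a = np.array([0.2, 0.8])        # averaged Word2Vec embedding of the citance
w2v_b = np.array([0.3, 0.7])        # averaged Word2Vec embedding of the span

score = hybrid_similarity(ctx_a, ctx_b, w2v_a, w2v_b, alpha=0.6)
```

In a real pipeline, candidate sentences from the reference paper would each receive such a score against the citance, and the highest-scoring spans would be retrieved.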
Anthology ID:
2022.coling-1.399
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4528–4536
URL:
https://aclanthology.org/2022.coling-1.399
Cite (ACL):
Amit Pandey, Avani Gupta, and Vikram Pudi. 2022. CitRet: A Hybrid Model for Cited Text Span Retrieval. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4528–4536, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
CitRet: A Hybrid Model for Cited Text Span Retrieval (Pandey et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.399.pdf
Code:
amitpandey-research/citret_public