Ahsaas Bajaj
2021
Long Document Summarization in a Low Resource Setting using Pretrained Language Models
Ahsaas Bajaj | Pavitra Dangati | Kalpesh Krishna | Pradhiksha Ashok Kumar | Rheeya Uppaal | Bradford Windsor | Eliot Brenner | Dominic Dotterrer | Rajarshi Das | Andrew McCallum
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop
Abstractive summarization is the task of compressing a long document into a coherent short document while retaining salient information. Modern abstractive summarization methods are based on deep neural networks, which often require large training datasets. Since collecting summarization datasets is an expensive and time-consuming task, practical industrial settings are usually low-resource. In this paper, we study a challenging low-resource setting of summarizing long legal briefs with an average source document length of 4268 words and only 120 available (document, summary) pairs. To account for data scarcity, we use a modern pre-trained abstractive summarizer, BART, which achieves only 17.9 ROUGE-L as it struggles with long documents. We thus attempt to compress these long documents by identifying the salient sentences in the source that best ground the summary, using a novel algorithm based on GPT-2 language model perplexity scores that operates within the low-resource regime. On feeding the compressed documents to BART, we observe a 6.0 ROUGE-L improvement. Our method also beats several competitive salience detection baselines. Furthermore, the identified salient sentences tend to agree with independent human labeling by domain experts.
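The abstract describes a compress-then-summarize pipeline: score source sentences with a GPT-2 language model, keep the most salient ones, and feed the shortened document to BART. The snippet below is a minimal sketch of one such perplexity-based selection step, assuming the HuggingFace transformers library and the public gpt2 checkpoint; the simple "keep the lowest-perplexity sentences" rule is an illustrative stand-in, not the paper's exact salience algorithm.

```python
# Hedged sketch: rank source sentences by GPT-2 perplexity and keep the
# lowest-perplexity ones as a compressed document (illustrative rule only,
# not the paper's exact scoring).
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Perplexity of a single sentence under GPT-2."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean token-level
        # cross-entropy loss for the sentence.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def compress(sentences: list[str], keep: int = 30) -> list[str]:
    """Keep the `keep` lowest-perplexity sentences, preserving document order."""
    ranked = sorted(range(len(sentences)), key=lambda i: sentence_perplexity(sentences[i]))
    kept = sorted(ranked[:keep])
    return [sentences[i] for i in kept]
```

The compressed sentence list would then be concatenated and passed to a BART summarizer in place of the full brief; the `keep` budget is a hypothetical knob chosen to fit BART's input length limit.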
2020
An Instance Level Approach for Shallow Semantic Parsing in Scientific Procedural Text
Daivik Swarup | Ahsaas Bajaj | Sheshera Mysore | Tim O’Gorman | Rajarshi Das | Andrew McCallum
Findings of the Association for Computational Linguistics: EMNLP 2020
In specific domains, such as procedural scientific text, human-labeled data for shallow semantic parsing is especially limited and expensive to create. Fortunately, such specific domains often use rather formulaic writing, such that the different ways of expressing relations in a small number of grammatically similar labeled sentences may provide high coverage of semantic structures in the corpus, through an appropriately rich similarity metric. In light of this opportunity, this paper explores an instance-based approach to the relation prediction sub-task within shallow semantic parsing, in which semantic labels from structurally similar sentences in the training set are copied to test sentences. Candidate similar sentences are retrieved using SciBERT embeddings. For labels where it is possible to copy from a similar sentence, we employ an instance-level copy network; when this is not possible, a globally shared parametric model is employed. Experiments show our approach outperforms both baseline and prior methods by 0.75 to 3 F1 absolute on the Wet Lab Protocol Corpus and 1 F1 absolute on the Materials Science Procedural Text Corpus.
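The retrieval step the abstract mentions, finding structurally similar training sentences via SciBERT embeddings, can be sketched as below. This assumes the allenai/scibert_scivocab_uncased checkpoint, mean pooling over token embeddings, and cosine similarity; the pooling choice and function names are illustrative assumptions, and the copy network and the shared parametric fallback are not shown.

```python
# Hedged sketch: retrieve structurally similar training sentences with
# mean-pooled SciBERT embeddings, from which labels could be copied.
# The copy network / parametric fallback from the paper are omitted.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
encoder = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
encoder.eval()

def embed(sentences: list[str]) -> torch.Tensor:
    """Mean-pooled SciBERT embeddings, one row per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)              # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # (B, H)

def retrieve_similar(test_sentence: str, train_sentences: list[str], k: int = 5) -> list[int]:
    """Indices of the k training sentences most similar to the test sentence."""
    query = embed([test_sentence])                            # (1, H)
    keys = embed(train_sentences)                             # (N, H)
    sims = torch.nn.functional.cosine_similarity(query, keys) # (N,)
    return sims.topk(k).indices.tolist()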