Noobs at Semeval-2021 Task 4: Masked Language Modeling for abstract answer prediction

Shikhar Shukla, Sarthak Sarthak, Karm Veer Arya


Abstract
This paper presents the system developed by our team for SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. The task benchmarks NLP techniques on understanding the abstract concepts present in a passage and then predicting the missing word in a human-written summary of that passage. We trained a RoBERTa-Large model with a masked language modeling objective. In cases where this model failed to predict one of the available options, a second RoBERTa-Large model, trained as a binary classifier, was used to label options as correct or incorrect. As inputs we used the question together with a passage summary generated by the PEGASUS model. Our best solution was an ensemble of these two systems, which achieved an accuracy of 86.22% on subtask 1 and 87.10% on subtask 2.
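The masked-language-modeling step described in the abstract reduces to scoring each candidate option at the masked position and picking the highest-scoring one. The sketch below illustrates that selection logic on precomputed mask-position logits; the function name and the toy vocabulary are illustrative assumptions, not the authors' released code, and it assumes each option maps to a single vocabulary token.

```python
import math

def pick_option(mask_logits, option_token_ids):
    """Given the masked-LM logits at the mask position and one
    vocabulary id per candidate option, return the index of the
    highest-scoring option. Softmax is monotonic in the logit,
    so comparing raw logits is equivalent to comparing
    probabilities over the candidate set."""
    best_idx, best_logit = None, -math.inf
    for i, tok_id in enumerate(option_token_ids):
        if mask_logits[tok_id] > best_logit:
            best_idx, best_logit = i, mask_logits[tok_id]
    return best_idx

# Toy example: a 6-token vocabulary; the three options map to ids 1, 3, 4.
logits = [0.1, 2.5, -1.0, 4.2, 0.0, 1.1]
print(pick_option(logits, [1, 3, 4]))
```

In the full system, `mask_logits` would come from a RoBERTa-Large forward pass over the passage-plus-question input with the answer slot replaced by the mask token; the paper's fallback binary classifier is consulted only when none of the options is among the model's predictions.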
Anthology ID:
2021.semeval-1.107
Volume:
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
805–809
URL:
https://aclanthology.org/2021.semeval-1.107
DOI:
10.18653/v1/2021.semeval-1.107
Cite (ACL):
Shikhar Shukla, Sarthak Sarthak, and Karm Veer Arya. 2021. Noobs at Semeval-2021 Task 4: Masked Language Modeling for abstract answer prediction. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 805–809, Online. Association for Computational Linguistics.
Cite (Informal):
Noobs at Semeval-2021 Task 4: Masked Language Modeling for abstract answer prediction (Shukla et al., SemEval 2021)
PDF:
https://aclanthology.org/2021.semeval-1.107.pdf