Stanford MLab at SemEval 2022 Task 7: Tree- and Transformer-Based Methods for Clarification Plausibility

Thomas Yim, Junha Lee, Rishi Verma, Scott Hickmann, Annie Zhu, Camron Sallade, Ian Ng, Ryan Chi, Patrick Liu


Abstract
In this paper, we detail the methods we used to determine the idiomaticity and plausibility of candidate words or phrases inserted into an instructional text, as part of SemEval Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts. Given a gap in a set of instructional steps, some candidate phrases fill it more plausibly than others. We explored several architectures, including tree-based methods over GloVe embeddings, ensembled BERT and ELECTRA models, and GPT-2-based infilling methods.
Anthology ID:
2022.semeval-1.150
Volume:
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
Month:
July
Year:
2022
Address:
Seattle, United States
Venues:
NAACL | SemEval
SIGs:
SIGSEM | SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1067–1070
URL:
https://aclanthology.org/2022.semeval-1.150
DOI:
10.18653/v1/2022.semeval-1.150
Cite (ACL):
Thomas Yim, Junha Lee, Rishi Verma, Scott Hickmann, Annie Zhu, Camron Sallade, Ian Ng, Ryan Chi, and Patrick Liu. 2022. Stanford MLab at SemEval 2022 Task 7: Tree- and Transformer-Based Methods for Clarification Plausibility. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1067–1070, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Stanford MLab at SemEval 2022 Task 7: Tree- and Transformer-Based Methods for Clarification Plausibility (Yim et al., SemEval 2022)
PDF:
https://aclanthology.org/2022.semeval-1.150.pdf