Stanford MLab at SemEval 2022 Task 7: Tree- and Transformer-Based Methods for Clarification Plausibility
Thomas Yim | Junha Lee | Rishi Verma | Scott Hickmann | Annie Zhu | Camron Sallade | Ian Ng | Ryan Chi | Patrick Liu
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
In this paper, we detail the methods we used to determine the idiomaticity and plausibility of candidate words or phrases inserted into an instructional text, as part of SemEval-2022 Task 7: Identifying Plausible Clarifications of Implicit and Underspecified Phrases in Instructional Texts. Given a step in an instructional text with an underspecified span, certain phrases fill that span more plausibly than others. We explored several architectures, including tree-based methods over GloVe embeddings, ensembled BERT and ELECTRA models, and GPT-2-based infilling methods.
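The transformer-based scoring idea mentioned in the abstract can be illustrated with a minimal sketch: ranking candidate fillers for a blank in an instructional step by their masked-language-model log-probability. This is not the authors' exact pipeline; the `bert-base-uncased` checkpoint, the `____` blank convention, and the `candidate_score` helper are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact method):
# score candidate fillers for a blank in an instructional step
# using a masked language model from Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def candidate_score(context: str, candidate: str) -> float:
    """Average log-probability of the candidate's tokens at the blank."""
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    # Replace the blank with one [MASK] token per candidate subword.
    masked = context.replace("____", " ".join([tokenizer.mask_token] * len(cand_ids)))
    inputs = tokenizer(masked, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    return log_probs[range(len(cand_ids)), cand_ids].mean().item()

# Example: more plausible clarifications should receive higher scores.
step = "Place the pan on the stove and ____ until the butter melts."
for cand in ["wait", "stir gently", "run away"]:
    print(cand, round(candidate_score(step, cand), 3))
```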