UIC-NLP at SemEval-2020 Task 10: Exploring an Alternate Perspective on Evaluation

Philip Hossu, Natalie Parde


Abstract
In this work we describe and analyze a supervised learning system for word emphasis selection in phrases drawn from visual media, developed as part of SemEval-2020 Shared Task 10. We begin by briefly introducing the shared task and analyzing interesting and relevant features of the training dataset. We then introduce our LSTM-based model, describing its structure, input features, and limitations. The model ultimately failed to beat the benchmark score, achieving an average match score of 0.704 on the validation data (0.659 on the test data), but it correctly predicted 84.8% of word-level emphasis labels when a 0.5 decision threshold was applied. We conclude with a thorough analysis and discussion of erroneous predictions, supported by numerous examples and visualizations.
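For readers unfamiliar with the task's evaluation, the sketch below illustrates one common formulation of the match metric referenced in the abstract: the overlap between the top-m predicted and top-m gold emphasis words, averaged over instances and over several values of m. This is an illustrative reconstruction rather than code from the paper; the function names and the choice of m values (1 through 4) are assumptions.

```python
from typing import List, Sequence


def match_m(pred_scores: Sequence[float], gold_scores: Sequence[float], m: int) -> float:
    """Overlap between the top-m predicted and top-m gold emphasis words in one phrase."""
    top_pred = set(sorted(range(len(pred_scores)), key=lambda i: -pred_scores[i])[:m])
    top_gold = set(sorted(range(len(gold_scores)), key=lambda i: -gold_scores[i])[:m])
    return len(top_pred & top_gold) / m


def average_match(pred: List[Sequence[float]],
                  gold: List[Sequence[float]],
                  ms=(1, 2, 3, 4)) -> float:
    """Average match_m over all instances, then over the chosen values of m."""
    per_m = [sum(match_m(p, g, m) for p, g in zip(pred, gold)) / len(pred) for m in ms]
    return sum(per_m) / len(per_m)
```

Under this formulation, a model that ranks emphasis probabilities well can score highly on the averaged match metric even when its thresholded per-word accuracy tells a different story, which is consistent with the gap between the two numbers reported in the abstract.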
Anthology ID: 2020.semeval-1.223
Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month: December
Year: 2020
Address: Barcelona (online)
Editors: Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue: SemEval
SIG: SIGLEX
Publisher: International Committee for Computational Linguistics
Pages: 1704–1709
URL: https://aclanthology.org/2020.semeval-1.223
DOI: 10.18653/v1/2020.semeval-1.223
Cite (ACL): Philip Hossu and Natalie Parde. 2020. UIC-NLP at SemEval-2020 Task 10: Exploring an Alternate Perspective on Evaluation. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1704–1709, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal): UIC-NLP at SemEval-2020 Task 10: Exploring an Alternate Perspective on Evaluation (Hossu & Parde, SemEval 2020)
PDF: https://aclanthology.org/2020.semeval-1.223.pdf