ML Mob at SemEval-2023 Task 5: “Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait”

Hannah Sterz, Leonard Bongard, Tobias Werner, Clifton Poth, Martin Hentschel


Abstract
Online articles often attract readers with striking headlines that promise intriguing information. Frequently, the article itself disappoints the reader after the headline promised exciting news. As part of the SemEval-2023 challenge, we propose a system that generates a spoiler for these headlines. The spoiler provides the information promised by the headline and eliminates the need to read the full article. Our system incorporates Multi-Task Learning and generates additional data using a distillation approach. With this, we achieve an F1 score of up to 51.48% on extracting the spoiler from the articles.
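
The extraction step described in the abstract can be pictured as extractive question answering over the article, with the clickbait headline acting as the question and the predicted answer span serving as the spoiler. The sketch below illustrates only this framing; the checkpoint name and the pipeline-based inference are illustrative assumptions, not the authors' actual multi-task or distillation setup.

```python
# Minimal sketch: clickbait spoiling framed as extractive question answering.
# Assumptions: the checkpoint below is a generic QA model, not the one used in the paper.
from transformers import pipeline

# Hypothetical checkpoint; the paper's actual model may differ.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

headline = "You won't believe what this study found about coffee"
article = (
    "A new study published this week reports that moderate coffee "
    "consumption is associated with improved alertness but has no "
    "measurable effect on long-term memory."
)

# The headline plays the role of the question, the article is the context;
# the highest-scoring answer span is returned as the spoiler.
result = qa(question=headline, context=article)
print(result["answer"])
```

Framing the task this way lets standard span-extraction metrics such as token-level F1 (as reported in the abstract) be computed directly against the gold spoiler span.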
Anthology ID:
2023.semeval-1.251
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1818–1823
URL:
https://aclanthology.org/2023.semeval-1.251
DOI:
10.18653/v1/2023.semeval-1.251
Cite (ACL):
Hannah Sterz, Leonard Bongard, Tobias Werner, Clifton Poth, and Martin Hentschel. 2023. ML Mob at SemEval-2023 Task 5: “Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait”. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 1818–1823, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
ML Mob at SemEval-2023 Task 5: “Breaking News: Our Semi-Supervised and Multi-Task Learning Approach Spoils Clickbait” (Sterz et al., SemEval 2023)
PDF:
https://aclanthology.org/2023.semeval-1.251.pdf