Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction

Michiel van der Meer, Myrthe Reuver, Urja Khurana, Lea Krause, Selene Baez Santamaria


Abstract
This paper describes our contributions to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses Large Language Models for the task of Argument Quality Prediction. We perform prompt engineering with GPT-3 and investigate three training paradigms: multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models: prompting GPT-3 works best for predicting argument validity, while argument novelty is best estimated by a model trained using all three paradigms.
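As a rough illustration of the prompting setup mentioned in the abstract, the sketch below asks a GPT-3 completion model for a binary validity judgment on a premise-conclusion pair. The prompt wording, model name (text-davinci-002), input fields, and label mapping are illustrative assumptions, not the configuration reported in the paper; it uses the legacy openai-python (<1.0) Completion API.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Hypothetical prompt template; the paper's actual prompt wording is not given here.
PROMPT = (
    "Topic: {topic}\n"
    "Premise: {premise}\n"
    "Conclusion: {conclusion}\n"
    "Is the conclusion a valid inference from the premise? Answer yes or no:"
)

def predict_validity(topic: str, premise: str, conclusion: str) -> str:
    """Query a GPT-3 completion model and map its answer to a validity label."""
    response = openai.Completion.create(
        model="text-davinci-002",   # assumed engine; the paper's exact model may differ
        prompt=PROMPT.format(topic=topic, premise=premise, conclusion=conclusion),
        max_tokens=3,
        temperature=0.0,            # deterministic decoding for classification-style use
    )
    answer = response["choices"][0]["text"].strip().lower()
    return "valid" if answer.startswith("yes") else "not valid"

if __name__ == "__main__":
    print(predict_validity(
        "School uniforms",
        "Uniforms reduce visible income differences between students.",
        "Schools should require uniforms.",
    ))
```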
Anthology ID: 2022.argmining-1.8
Volume: Proceedings of the 9th Workshop on Argument Mining
Month: October
Year: 2022
Address: Online and in Gyeongju, Republic of Korea
Editors: Gabriella Lapesa, Jodi Schneider, Yohan Jo, Sougata Saha
Venue: ArgMining
Publisher: International Conference on Computational Linguistics
Pages: 95–103
URL: https://aclanthology.org/2022.argmining-1.8
Cite (ACL): Michiel van der Meer, Myrthe Reuver, Urja Khurana, Lea Krause, and Selene Baez Santamaria. 2022. Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction. In Proceedings of the 9th Workshop on Argument Mining, pages 95–103, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics.
Cite (Informal): Will It Blend? Mixing Training Paradigms & Prompting for Argument Quality Prediction (van der Meer et al., ArgMining 2022)
PDF: https://aclanthology.org/2022.argmining-1.8.pdf
Data: MultiNLI