Argument Novelty and Validity Assessment via Multitask and Transfer Learning

Milad Alshomary, Maja Stahl


Abstract
An argument is a constellation of premises reasoning toward a certain conclusion. The automatic generation of conclusions has become a prominent task, raising the need for automatic measures that assess the quality of generated conclusions. The shared task at the 9th Workshop on Argument Mining proposes a new task: assessing the novelty and validity of a conclusion given a set of premises. In this paper, we present a multitask learning approach that transfers knowledge learned from the natural language inference task to the tasks at hand. Evaluation results indicate the importance of both knowledge transfer and joint learning, placing our approach in fifth place with strong results compared to the baselines.
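The core idea in the abstract, a shared encoder carrying NLI-derived knowledge that feeds two jointly trained classifiers for validity and novelty, can be illustrated with a minimal sketch. This is not the authors' implementation: the encoder checkpoint ("roberta-large-mnli"), the [CLS]-style pooling, the binary label counts, and the summed-loss training note are all assumptions for illustration.

```python
# Minimal sketch (assumptions, not the authors' code): a shared encoder
# initialized from an NLI-fine-tuned checkpoint, with one classification
# head per task for multitask (joint) learning.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class NoveltyValidityModel(nn.Module):
    def __init__(self, encoder_name: str = "roberta-large-mnli", num_labels: int = 2):
        super().__init__()
        # Shared encoder: reuses weights learned on MultiNLI (transfer learning).
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # One head per task (multitask learning).
        self.validity_head = nn.Linear(hidden, num_labels)
        self.novelty_head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # first-token pooling (assumption)
        return self.validity_head(cls), self.novelty_head(cls)


tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = NoveltyValidityModel()

# Premises and conclusion are encoded as a text pair, analogous to the
# premise/hypothesis pairs used in NLI.
batch = tokenizer(
    ["CO2 emissions keep rising despite existing climate policies."],
    ["Therefore, stricter emission regulations are needed."],
    return_tensors="pt", padding=True, truncation=True,
)
validity_logits, novelty_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint training would sum the per-task losses, e.g.:
# loss = ce(validity_logits, validity_labels) + ce(novelty_logits, novelty_labels)
```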
Anthology ID:
2022.argmining-1.10
Volume:
Proceedings of the 9th Workshop on Argument Mining
Month:
October
Year:
2022
Address:
Online and in Gyeongju, Republic of Korea
Editors:
Gabriella Lapesa, Jodi Schneider, Yohan Jo, Sougata Saha
Venue:
ArgMining
Publisher:
International Conference on Computational Linguistics
Pages:
111–114
URL:
https://aclanthology.org/2022.argmining-1.10
Cite (ACL):
Milad Alshomary and Maja Stahl. 2022. Argument Novelty and Validity Assessment via Multitask and Transfer Learning. In Proceedings of the 9th Workshop on Argument Mining, pages 111–114, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics.
Cite (Informal):
Argument Novelty and Validity Assessment via Multitask and Transfer Learning (Alshomary & Stahl, ArgMining 2022)
PDF:
https://aclanthology.org/2022.argmining-1.10.pdf
Data:
MultiNLI