Matt Ryan
2023
Augmenting pre-trained language models with audio feature embedding for argumentation mining in political debates
Rafael Mestre | Stuart E. Middleton | Matt Ryan | Masood Gheasi | Timothy Norman | Jiatong Zhu
Findings of the Association for Computational Linguistics: EACL 2023
The integration of multimodality in natural language processing (NLP) tasks seeks to exploit the complementary information contained in two or more modalities, such as text, audio and video. This paper investigates the integration of often under-researched audio features with text, using the task of argumentation mining (AM) as a case study. We take a previously reported dataset and present an audio-enhanced version (the Multimodal USElecDeb60To16 dataset). We report the performance of two text models based on BERT and GloVe embeddings, one audio model (based on CNN and Bi-LSTM) and multimodal combinations, on a dataset of 28,850 utterances. The results show that multimodal models do not outperform text-based models when using the full dataset. However, we show that audio features add value in fully supervised scenarios with limited data. We find that when data is scarce (e.g., with 10% of the original dataset) multimodal models yield improved performance, whereas the performance of BERT-based text models decreases considerably. Finally, we conduct a study with artificially generated voices and an ablation study to investigate the importance of different audio features in the audio models.
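The abstract names the model families (BERT/GloVe text models, a CNN and Bi-LSTM audio model, and multimodal combinations) but not their implementation. Below is a minimal, hypothetical sketch of the kind of text-audio late-fusion classifier this describes; the layer sizes, mel-spectrogram input and fusion head are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): a late-fusion classifier that
# combines a precomputed BERT sentence embedding with an audio branch
# (CNN + Bi-LSTM), as the abstract describes. Dimensions are illustrative.
import torch
import torch.nn as nn

class AudioBranch(nn.Module):
    def __init__(self, n_mels=64, hidden=128):
        super().__init__()
        # 1-D convolution over the time axis of a mel-spectrogram
        self.conv = nn.Conv1d(n_mels, 128, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)

    def forward(self, mel):                      # mel: (batch, n_mels, time)
        x = torch.relu(self.conv(mel))           # (batch, 128, time)
        x = x.transpose(1, 2)                    # (batch, time, 128)
        _, (h, _) = self.lstm(x)                 # h: (2, batch, hidden)
        return torch.cat([h[0], h[1]], dim=-1)   # (batch, 2 * hidden)

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=256, n_classes=2):
        super().__init__()
        self.audio = AudioBranch()
        self.head = nn.Sequential(
            nn.Linear(text_dim + audio_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, text_emb, mel):
        # text_emb: e.g. the [CLS] vector from a frozen BERT encoder
        fused = torch.cat([text_emb, self.audio(mel)], dim=-1)
        return self.head(fused)

# Example: batch of 4 utterances, 768-d BERT embeddings, mel clips of 300 frames
logits = LateFusionClassifier()(torch.randn(4, 768), torch.randn(4, 64, 300))
```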
2021
M-Arg: Multimodal Argument Mining Dataset for Political Debates with Audio and Transcripts
Rafael Mestre | Razvan Milicin | Stuart E. Middleton | Matt Ryan | Jiatong Zhu | Timothy J. Norman
Proceedings of the 8th Workshop on Argument Mining
Argumentation mining aims at extracting, analysing and modelling people’s arguments, but large, high-quality annotated datasets are limited, and no multimodal datasets exist for this task. In this paper, we present M-Arg, a multimodal argument mining dataset built on a corpus of the US 2020 presidential debates and labelled through crowd-sourced annotation. This dataset allows models to be trained to extract arguments from natural dialogue such as debates, using information like the intonation and rhythm of the speaker. Our dataset contains 7 hours of annotated US presidential debates, 6527 utterances and 4104 relation labels, and we report results from different baseline models, namely a text-only model, an audio-only model and multimodal models that extract features from both text and audio. With accuracy reaching 0.86 in multimodal models, we find that audio features provide added value with respect to text-only models.
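The abstract does not specify how intonation and rhythm are represented. The sketch below is a hypothetical example of extracting such prosodic features from a single utterance clip with librosa; the feature choices (pYIN pitch statistics, onset strength, MFCC means) and the function name are illustrative assumptions, not the dataset's pipeline.

```python
# Minimal sketch (not the dataset's pipeline): prosodic features of the kind
# the abstract mentions (intonation, rhythm) for one utterance clip.
import librosa
import numpy as np

def utterance_features(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    # Intonation: fundamental-frequency track via pYIN (NaN in unvoiced frames)
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=300, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Rhythm proxy: onset-strength envelope, plus spectral shape via MFCCs
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return {
        "f0_mean": float(f0.mean()) if f0.size else 0.0,
        "f0_std": float(f0.std()) if f0.size else 0.0,
        "onset_strength_mean": float(onset_env.mean()),
        "mfcc_mean": mfcc.mean(axis=1),   # 13-dim vector
    }
```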