MEANT: Multimodal Encoder for Antecedent Information

Benjamin Irving, Annika Schoene


Abstract
The stock market provides a rich well of information that can be split across modalities, making it an ideal candidate for multimodal evaluation. Multimodal data plays an increasingly important role in the development of machine learning and has been shown to positively impact performance. But information can do more than exist across modes: it can also exist across time. How should we attend to temporal data that consists of multiple information types? This work introduces (i) the MEANT model, a Multimodal Encoder for Antecedent information, and (ii) a new dataset called TempStock, which consists of price, Tweet, and graphical data, with over a million Tweets from all of the companies in the S&P 500 Index. We find that MEANT improves performance over existing baselines by more than 15%, and our ablation study shows that textual information affects performance on our time-dependent task far more than visual information. The code and dataset will be made available upon publication.
Anthology ID: 2024.emnlp-main.488
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 8579–8600
URL: https://aclanthology.org/2024.emnlp-main.488
Cite (ACL): Benjamin Irving and Annika Schoene. 2024. MEANT: Multimodal Encoder for Antecedent Information. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8579–8600, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): MEANT: Multimodal Encoder for Antecedent Information (Irving & Schoene, EMNLP 2024)
PDF: https://aclanthology.org/2024.emnlp-main.488.pdf