Benjamin Irving
2024
MEANT: Multimodal Encoder for Antecedent Information
Benjamin Irving
|
Annika Marie Schoene
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
The stock market provides a rich well of information that can be split across modalities, making it an ideal candidate for multimodal evaluation. Multimodal data plays an increasingly important role in the development of machine learning and has been shown to positively impact performance. But information can do more than exist across modes: it can also exist across time. How should we attend to temporal data that consists of multiple information types? This work introduces (i) the MEANT model, a Multimodal Encoder for Antecedent information, and (ii) a new dataset called TempStock, which consists of price, Tweet, and graphical data, with over a million Tweets from all of the companies in the S&P 500 Index. We find that MEANT improves performance over existing baselines by more than 15%, and our ablation study shows that textual information affects performance on our time-dependent task far more than visual information. The code and dataset will be made available upon publication.
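The abstract does not detail MEANT's architecture, but the core question it poses (attending over lagged, multimodal inputs) can be illustrated with a minimal, hypothetical PyTorch sketch: per-day price and text features are fused, then self-attention runs over the lag (time) axis. All module names, dimensions, and the fusion strategy here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalFusionSketch(nn.Module):
    """Hypothetical sketch: fuse per-step price and text features,
    then self-attend across the lag (time) dimension."""
    def __init__(self, price_dim=5, text_dim=768, d_model=128, n_heads=4):
        super().__init__()
        self.price_proj = nn.Linear(price_dim, d_model)
        self.text_proj = nn.Linear(text_dim, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)  # e.g., up/down price movement

    def forward(self, price, text):
        # price: (batch, lag, price_dim); text: (batch, lag, text_dim)
        x = self.price_proj(price) + self.text_proj(text)  # (batch, lag, d_model)
        x, _ = self.attn(x, x, x)   # attend over the lag dimension
        return self.head(x[:, -1])  # predict from the most recent step

# Usage: 5 lagged days of OHLCV prices plus pooled tweet embeddings per day
model = TemporalFusionSketch()
logits = model(torch.randn(8, 5, 5), torch.randn(8, 5, 768))
print(logits.shape)  # torch.Size([8, 2])
```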
Related Work Is All You Need
Rodolfo Joel Zevallos
|
John E. Ortega
|
Benjamin Irving
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In modern times, generative artificial intelligence is used in several industries and by many people. One use case that can be considered important but somewhat redundant is the act of searching for related work and other references to cite. As an avenue to better ascertain the value of citations and their corresponding locations, we focus our experiments on the common "related work" section, with the overall objective of generating that section. In this article, we present a corpus with 400k annotations that distinguish related-work references from the rest of the references. Additionally, we show that for the papers in our experiments, the related work section represents the paper just as well as, and in many cases better than, the rest of the references. We show that this is the case for more than 74% of the articles when using cosine similarity to measure the distance between embeddings produced by two common graph neural network algorithms: ProNE and SPECTER.
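The comparison the abstract describes reduces to cosine similarity between embedding vectors. As a minimal sketch, assuming fixed-size vectors stand in for ProNE or SPECTER embeddings of a paper, its related-work references, and its other references (the random vectors below are placeholders, not real embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical vectors standing in for ProNE/SPECTER embeddings of
# (i) the paper, (ii) its related-work references, (iii) its other references.
rng = np.random.default_rng(0)
paper = rng.normal(size=768)
related_work = rng.normal(size=768)
other_refs = rng.normal(size=768)

# The paper's question: does the related-work section represent
# the paper at least as well as the remaining references do?
print(cosine_similarity(paper, related_work) >= cosine_similarity(paper, other_refs))
```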