Extracting Financial Causality through QA: Insights from FinCausal 2025 Spanish Subtask

Marcelo Jose Moreno Aviles, Alejandro Vaca


Abstract
The methodology tested both span extraction and generative tasks, with generative models ultimately proving more effective. The best-performing model was SuperLenia, a private generative model built by combining public models with sizes ranging from 7B to 8B parameters. SuperLenia was fine-tuned using QLoRA in a chat-based framework, and tuning its inference hyperparameters, including temperature and sampling settings, further enhanced its performance.
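The inference-time adjustments the abstract mentions (temperature and sampling) can be illustrated with a minimal, self-contained sketch of temperature-scaled nucleus (top-p) sampling over a toy logit vector. The function name and values are illustrative assumptions, not taken from the paper, which does not specify its exact sampling configuration.

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_p=0.9, rng=None):
    """Temperature-scaled nucleus (top-p) sampling over a logit vector.

    Lower temperature sharpens the softmax distribution; top_p restricts
    sampling to the smallest set of tokens whose cumulative probability
    reaches top_p, cutting off the low-probability tail.
    """
    rng = rng or random.Random(0)
    # Temperature scaling: divide logits before the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sort token indices by probability and keep the top-p nucleus.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalise within the nucleus and draw one token index.
    mass = sum(probs[i] for i in nucleus)
    r = rng.random() * mass
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]
```

With a strongly peaked logit vector and a low temperature, the nucleus collapses to a single token and sampling becomes effectively greedy, which is why lowering temperature is a common lever for extraction-style tasks where deterministic spans are preferred.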
Anthology ID:
2025.finnlp-1.29
Volume:
Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal)
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Chung-Chi Chen, Antonio Moreno-Sandoval, Jimin Huang, Qianqian Xie, Sophia Ananiadou, Hsin-Hsi Chen
Venues:
FinNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
265–270
URL:
https://aclanthology.org/2025.finnlp-1.29/
Cite (ACL):
Marcelo Jose Moreno Aviles and Alejandro Vaca. 2025. Extracting Financial Causality through QA: Insights from FinCausal 2025 Spanish Subtask. In Proceedings of the Joint Workshop of the 9th Financial Technology and Natural Language Processing (FinNLP), the 6th Financial Narrative Processing (FNP), and the 1st Workshop on Large Language Models for Finance and Legal (LLMFinLegal), pages 265–270, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Extracting Financial Causality through QA: Insights from FinCausal 2025 Spanish Subtask (Moreno Aviles & Vaca, FinNLP 2025)
PDF:
https://aclanthology.org/2025.finnlp-1.29.pdf