Zhiwei Li
2024
Can Large Language Models Mine Interpretable Financial Factors More Effectively? A Neural-Symbolic Factor Mining Agent Model
Zhiwei Li | Ran Song | Caihong Sun | Wei Xu | Zhengtao Yu | Ji-Rong Wen
Findings of the Association for Computational Linguistics: ACL 2024
Finding interpretable factors for stock returns is a central problem in empirical asset pricing. Existing data-driven factor mining models can be categorized into symbol-based and neural-based models: symbol-based models are interpretable but inefficient, while neural-based approaches are efficient but lack interpretability. Mining interpretable factors efficiently therefore remains a significant challenge. Inspired by the success of Large Language Models (LLMs) on a variety of tasks, we propose a FActor Mining Agent (FAMA) model that enables LLMs to integrate the strengths of both neural and symbolic models for factor mining. FAMA consists of two main components: Cross-Sample Selection (CSS) and Chain-of-Experience (CoE). CSS addresses the homogeneity of LLM outputs during factor mining by assimilating diverse factors as in-context samples, whereas CoE enables LLMs to leverage past successful mining experiences, expediting the discovery of effective factors. Experimental evaluations on real-world stock market data demonstrate the effectiveness of our approach, which surpasses the SOTA in RankIC by 0.006 and in RankICIR by 0.105 when predicting S&P 500 returns. Furthermore, an investment simulation shows that our model achieves superior performance, with an annualized return of 38.4% and a Sharpe ratio of 667.2%.
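For context on the metrics cited in the abstract: RankIC is conventionally the cross-sectional Spearman rank correlation between a factor's values and subsequent stock returns on a given day, and RankICIR is the mean of the daily RankIC series divided by its standard deviation. The following is a minimal sketch of these standard definitions with hypothetical random data, not code from the paper:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_ic(factor: np.ndarray, fwd_returns: np.ndarray) -> float:
    """Cross-sectional Spearman rank correlation between factor values
    and forward returns for a single trading day."""
    corr, _ = spearmanr(factor, fwd_returns)
    return corr

def rank_icir(daily_rank_ics: np.ndarray) -> float:
    """Information ratio of the daily RankIC series: mean over std."""
    return daily_rank_ics.mean() / daily_rank_ics.std(ddof=1)

# Hypothetical usage: 250 trading days x 500 stocks (e.g., an S&P 500 universe).
rng = np.random.default_rng(0)
factor = rng.normal(size=(250, 500))    # factor value per day and stock
returns = rng.normal(size=(250, 500))   # next-period return per day and stock
ics = np.array([rank_ic(factor[t], returns[t]) for t in range(250)])
print(f"mean RankIC = {ics.mean():.4f}, RankICIR = {rank_icir(ics):.4f}")
```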
2009
LogisticLDA: Regularizing Latent Dirichlet Allocation by Logistic Regression
Jia-Cheng Guo | Bao-Liang Lu | Zhiwei Li | Lei Zhang
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 1