Quantitative Day Trading from Natural Language using Reinforcement Learning

Ramit Sawhney, Arnav Wadhwa, Shivam Agarwal, Rajiv Ratn Shah


Abstract
It is challenging to design profitable and practical trading strategies, as stock price movements are highly stochastic, and the market is heavily influenced by chaotic data across sources like news and social media. Existing NLP approaches largely treat stock prediction as a classification or regression problem and are not optimized to make profitable investment decisions. Further, they do not model the temporal dynamics of large volumes of diversely influential text to which the market responds quickly. To address these shortcomings, we propose a deep reinforcement learning approach that makes time-aware decisions to trade stocks while optimizing profit using textual data. Our method outperforms the state of the art in terms of risk-adjusted returns in trading simulations on two benchmarks: Tweets (English) and financial news (Chinese) pertaining to two major indexes and four global stock markets. Through extensive experiments and studies, we build the case for our method as a tool for quantitative trading.
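To make the core idea concrete, below is a minimal, illustrative sketch of reinforcement-learning-based trading driven by text features. This is not the paper's architecture: the linear softmax policy, the feature sizes, the REINFORCE update, and the synthetic price data are all simplifying assumptions chosen only to show how an agent can be optimized for profit (the reward) rather than for classification accuracy.

```python
import numpy as np

# Illustrative REINFORCE-style trading sketch (NOT the paper's method).
# State: a feature vector summarizing the day's text (e.g., pooled
# sentiment/embedding features). Actions: 0=sell, 1=hold, 2=buy.
rng = np.random.default_rng(0)

N_FEATURES, N_ACTIONS = 4, 3            # hypothetical sizes
W = np.zeros((N_FEATURES, N_ACTIONS))   # linear policy weights

def policy(state):
    """Softmax distribution over trading actions."""
    logits = state @ W
    p = np.exp(logits - logits.max())
    return p / p.sum()

def step_update(state, price_change, lr=0.1):
    """Sample an action, take realized profit as reward, apply REINFORCE."""
    global W
    probs = policy(state)
    action = rng.choice(N_ACTIONS, p=probs)
    position = action - 1                # maps actions to -1 / 0 / +1
    reward = position * price_change     # profit is the training signal
    grad = -probs
    grad[action] += 1.0                  # d log pi(action) / d logits
    W = W + lr * reward * np.outer(state, grad)
    return reward

# Toy loop: the first text feature weakly predicts the next price move,
# so a profit-maximizing agent should learn to buy when it is positive.
for _ in range(2000):
    s = rng.normal(size=N_FEATURES)
    change = 0.5 * s[0] + 0.1 * rng.normal()  # synthetic price move
    step_update(s, change)
```

Because the update weights each log-probability gradient by realized profit, the agent directly optimizes trading returns; a classifier trained on up/down labels would instead treat a marginally profitable move and a large one identically.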
Anthology ID:
2021.naacl-main.316
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4018–4030
URL:
https://aclanthology.org/2021.naacl-main.316
DOI:
10.18653/v1/2021.naacl-main.316
Cite (ACL):
Ramit Sawhney, Arnav Wadhwa, Shivam Agarwal, and Rajiv Ratn Shah. 2021. Quantitative Day Trading from Natural Language using Reinforcement Learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4018–4030, Online. Association for Computational Linguistics.
Cite (Informal):
Quantitative Day Trading from Natural Language using Reinforcement Learning (Sawhney et al., NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.316.pdf