AI4Finance-LLC / FinRL-Library
- Wednesday, November 25, 2020, 00:32:26
A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance
This repository contains the code for our paper, which appears at the Deep RL Workshop, NeurIPS 2020.
As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, getting hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging.
In this paper, we introduce FinRL, a DRL library that helps beginners gain exposure to quantitative finance and develop their own stock trading strategies. Along with easily reproducible tutorials, the FinRL library allows users to streamline their own development and to compare against existing schemes easily. Within FinRL, virtual environments are configured with stock market datasets, trading agents are trained with neural networks, and extensive backtesting is analyzed via trading performance. Moreover, it incorporates important trading constraints such as transaction cost, market liquidity, and the investor's degree of risk aversion.
FinRL features completeness, hands-on tutorials, and reproducibility that favor beginners: (i) at multiple levels of time granularity, FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms (DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and standard evaluation baselines to alleviate debugging workloads and promote reproducibility; and (iii) being highly extendable, FinRL reserves a complete set of user-import interfaces.
Furthermore, we include three application demonstrations: single stock trading, multiple stock trading, and portfolio allocation.
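To make the abstract concrete, here is a deliberately simplified sketch of a gym-style single-stock trading environment with a transaction-cost penalty, in the spirit of the constraints described above. This toy class and its cost figure are illustrative assumptions, not FinRL's actual API; FinRL's real environments follow the OpenAI Gym interface and model far richer state spaces (technical indicators, multiple tickers, liquidity, risk aversion).

```python
# Toy gym-style trading environment (illustrative only, not FinRL's API).
# State = (cash, shares, price); action in {-1, 0, +1} = sell / hold / buy
# one share. Reward = change in portfolio value, net of transaction cost.

class ToyStockTradingEnv:
    TRANSACTION_COST_PCT = 0.001  # assumed 0.1% cost per trade

    def __init__(self, prices, initial_cash=1_000.0):
        self.prices = prices
        self.initial_cash = initial_cash
        self.reset()

    def reset(self):
        self.t = 0
        self.cash = self.initial_cash
        self.shares = 0
        return self._state()

    def _state(self):
        return (self.cash, self.shares, self.prices[self.t])

    def _portfolio_value(self):
        return self.cash + self.shares * self.prices[self.t]

    def step(self, action):
        before = self._portfolio_value()
        price = self.prices[self.t]
        if action == 1 and self.cash >= price:      # buy one share
            self.cash -= price * (1 + self.TRANSACTION_COST_PCT)
            self.shares += 1
        elif action == -1 and self.shares > 0:      # sell one share
            self.cash += price * (1 - self.TRANSACTION_COST_PCT)
            self.shares -= 1
        self.t += 1                                  # advance one trading day
        done = self.t == len(self.prices) - 1
        reward = self._portfolio_value() - before    # change in wealth
        return self._state(), reward, done
```

A DRL agent interacts with such an environment through the familiar reset/step loop, learning a policy that maximizes cumulative reward while paying the transaction-cost penalty on every trade.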
Clone this repository
git clone https://github.com/AI4Finance-LLC/FinRL-Library.git
Install the unstable development version of FinRL:
pip install git+https://github.com/AI4Finance-LLC/FinRL-Library.git
For OpenAI Baselines, you'll need the system packages CMake, OpenMPI, and zlib. These can be installed as follows:
sudo apt-get update && sudo apt-get install cmake libopenmpi-dev python3-dev zlib1g-dev libgl1-mesa-glx
Installation of system packages on Mac requires Homebrew. With Homebrew installed, run the following:
brew install cmake openmpi
To install stable-baselines on Windows, please look at the documentation.
cd into this repository
cd FinRL-Library
Under folder /FinRL-Library, create a virtual environment
pip install virtualenv
Virtualenvs are essentially folders that contain a copy of the Python executable and all installed Python packages; they also help avoid package conflicts.
Create a virtualenv venv under folder /FinRL-Library
virtualenv -p python3 venv
To activate a virtualenv:
source venv/bin/activate
The script has been tested under Python >= 3.6.0, with the following packages installed:
pip install -r requirements.txt
If you have questions regarding TensorFlow, note that TensorFlow 2.0 is not currently compatible; you may use
pip install tensorflow==1.15.4
If you have questions regarding the Stable-Baselines package, please refer to the Stable-Baselines installation guide. Install the Stable-Baselines package using pip:
pip install stable-baselines[mpi]
This includes an optional dependency on MPI, enabling the DDPG, GAIL, PPO1, and TRPO algorithms. If you do not need these algorithms, you can install without MPI:
pip install stable-baselines
Please read the documentation for more details and alternatives (from source, using docker).
python main.py --mode=train
Use Quantopian's pyfolio package to do the backtesting.
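pyfolio builds "tear sheets" from a strategy's daily-returns series; two of the headline statistics it reports are the annualized Sharpe ratio and the maximum drawdown. The following is a minimal, dependency-free sketch of both metrics for intuition only, it is not pyfolio's implementation:

```python
# Illustrative backtest metrics (not pyfolio's implementation).
import math

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio of a daily-returns series (risk-free rate 0)."""
    mean = sum(daily_returns) / len(daily_returns)
    var = sum((r - mean) ** 2 for r in daily_returns) / (len(daily_returns) - 1)
    return math.sqrt(periods_per_year) * mean / math.sqrt(var)

def max_drawdown(daily_returns):
    """Largest peak-to-trough loss of cumulative value; <= 0 by construction."""
    value, peak, worst = 1.0, 1.0, 0.0
    for r in daily_returns:
        value *= 1 + r
        peak = max(peak, value)
        worst = min(worst, value / peak - 1)  # drawdown relative to the peak
    return worst
```

In practice you would pass the agent's daily returns (e.g. a pandas Series indexed by date) to pyfolio's tear-sheet functions rather than computing these by hand.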
The stock data we use is pulled from the Yahoo Finance API.
@article{finrl2020,
author = {Liu, Xiao-Yang and Yang, Hongyang and Chen, Qian and Zhang, Runjia and Yang, Liuqing and Xiao, Bowen and Wang, Christina Dan},
journal = {Deep RL Workshop, NeurIPS 2020},
  title = {{FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance}},
url = {},
year = {2020}
}