Recently released features
We are excited to announce the release of RD-Agent📢, a powerful tool that supports automated factor mining and model optimization in quant investment R&D.
RD-Agent is now available on GitHub, and we welcome your star🌟!
To learn more, please visit our ♾️Demo page. Here, you will find demo videos in both English and Chinese to help you better understand the scenario and usage of RD-Agent.
We have prepared several demo videos for you:
Scenario | Demo video (English) | Demo video (Chinese) |
---|---|---|
Quant Factor Mining | Link | Link |
Quant Factor Mining from reports | Link | Link |
Quant Model Optimization | Link | Link |
Feature | Status |
---|---|
BPQP for End-to-end learning | 📈Coming soon!(Under review) |
🔥LLM-driven Auto Quant Factory🔥 | 🚀 Released in ♾️RD-Agent on Aug 8, 2024 |
KRNN and Sandwich models | 📈 Released on May 26, 2023 |
Release Qlib v0.9.0 | Released on Dec 9, 2022 |
RL Learning Framework | 🔨 📈 Released on Nov 10, 2022. #1332, #1322, #1316, #1299, #1263, #1244, #1169, #1125, #1076 |
HIST and IGMTF models | 📈 Released on Apr 10, 2022 |
Qlib notebook tutorial | 📖 Released on Apr 7, 2022 |
Ibovespa index data | 🍚 Released on Apr 6, 2022 |
Point-in-Time database | 🔨 Released on Mar 10, 2022 |
Arctic Provider Backend & Orderbook data example | 🔨 Released on Jan 17, 2022 |
Meta-Learning-based framework & DDG-DA | 📈 🔨 Released on Jan 10, 2022 |
Planning-based portfolio optimization | 🔨 Released on Dec 28, 2021 |
Release Qlib v0.8.0 | Released on Dec 8, 2021 |
ADD model | 📈 Released on Nov 22, 2021 |
ADARNN model | 📈 Released on Nov 14, 2021 |
TCN model | 📈 Released on Nov 4, 2021 |
Nested Decision Framework | 🔨 Released on Oct 1, 2021. Example and Doc |
Temporal Routing Adaptor (TRA) | 📈 Released on July 30, 2021 |
Transformer & Localformer | 📈 Released on July 22, 2021 |
Release Qlib v0.7.0 | Released on July 12, 2021 |
TCTS Model | 📈 Released on July 1, 2021 |
Online serving and automatic model rolling | 🔨 Released on May 17, 2021 |
DoubleEnsemble Model | 📈 Released on Mar 2, 2021 |
High-frequency data processing example | 🔨 Released on Feb 5, 2021 |
High-frequency trading example | 📈 Part of code released on Jan 28, 2021 |
High-frequency data(1min) | 🍚 Released on Jan 27, 2021 |
Tabnet Model | 📈 Released on Jan 22, 2021 |
Features released before 2021 are not listed here.
Qlib is an open-source, AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas to implementing them in production. Qlib supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and reinforcement learning.
An increasing number of SOTA Quant research works/papers in diverse paradigms are being released in Qlib to collaboratively solve key challenges in quantitative investment. For example, 1) using supervised learning to mine the market's complex non-linear patterns from rich and heterogeneous financial data, 2) modeling the dynamic nature of the financial market using adaptive concept drift technology, and 3) using reinforcement learning to model continuous investment decisions and assist investors in optimizing their trading strategies.
It contains the full ML pipeline of data processing, model training, and back-testing, and covers the entire chain of quantitative investment: alpha seeking, risk modeling, portfolio optimization, and order execution. For more details, please refer to our paper "Qlib: An AI-oriented Quantitative Investment Platform".
The documentation and resources are organized around two themes: Frameworks, Tutorial, Data & DevOps, and the Main Challenges & Solutions in Quant Research.
New features under development (ordered by estimated release time). Your feedback about these features is very important.
The high-level framework of Qlib can be found above (users can find the detailed framework of Qlib's design when getting into the nitty-gritty details). The components are designed as loosely coupled modules, and each component can be used stand-alone.
Qlib provides a strong infrastructure to support Quant research. Data is always an important part. A strong learning framework is designed to support diverse learning paradigms (e.g. reinforcement learning, supervised learning) and patterns at different levels (e.g. market dynamics modeling). By modeling the market, trading strategies generate trade decisions that are then executed. Multiple trading strategies and executors at different levels or granularities can be nested, optimized, and run together. Finally, a comprehensive analysis is provided, and the model can be served online at low cost.
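As a small illustration of the loosely coupled design, the data layer can be used entirely on its own, without touching the model, strategy, or backtest layers. This is only a minimal sketch, assuming Qlib is installed and the China-market daily data described later in this README has been prepared under `~/.qlib/qlib_data/cn_data`.

```python
# A minimal sketch of using Qlib's data module stand-alone
# (assumes the cn_data daily dataset has already been downloaded
# as described in the data preparation section below).
import qlib
from qlib.data import D

# Point Qlib at the locally deployed data (offline mode).
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

# Query raw daily fields for a universe of instruments.
instruments = D.instruments(market="csi300")
df = D.features(
    instruments,
    fields=["$open", "$close", "$volume"],
    start_time="2019-01-01",
    end_time="2019-12-31",
    freq="day",
)
print(df.head())
```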
This quick start guide tries to demonstrate
- It's very easy to build a complete Quant research workflow and try your ideas with Qlib.
- Even with public data and simple models, machine learning technologies can work very well in practical Quant investment.
Here is a quick demo that shows how to install Qlib and run LightGBM with qrun. Please make sure you have already prepared the data following the instructions.
This table demonstrates the supported Python versions of Qlib:
Python version | Install with pip | Install from source | Plot |
---|---|---|---|
Python 3.7 | ✔️ | ✔️ | ✔️ |
Python 3.8 | ✔️ | ✔️ | ✔️ |
Python 3.9 | ❌ | ✔️ | ❌ |
Note:
- Conda is suggested for managing your Python environment. In some cases, using Python outside of a `conda` environment may result in missing header files, causing the installation failure of certain packages.
- Please pay attention that installing cython in Python 3.6 will raise some errors when installing `Qlib` from source. If users use Python 3.6 on their machines, it is recommended to upgrade Python to version 3.7 or use `conda`'s Python to install `Qlib` from source.
- For Python 3.9, `Qlib` supports running workflows such as training models, doing backtests, and plotting most of the related figures (those included in the notebook). However, plotting for the model performance is not supported for now, and we will fix this when the dependent packages are upgraded in the future. `Qlib` requires the `tables` package, and `hdf5` in `tables` does not support Python 3.9.
Users can easily install Qlib with pip using the following command:
pip install pyqlib
Note: pip will install the latest stable version of qlib. However, the main branch of qlib is in active development. If you want to test the latest scripts or functions in the main branch, please install qlib with the methods below.
Also, users can install the latest dev version of Qlib from the source code according to the following steps:
- Before installing Qlib from source, users need to install some dependencies:

  ```bash
  pip install numpy
  pip install --upgrade cython
  ```

- Clone the repository and install Qlib as follows.

  ```bash
  git clone https://github.com/microsoft/qlib.git && cd qlib
  pip install .  # `pip install -e .[dev]` is recommended for development. check details in docs/developer/code_standard_and_dev_guide.rst
  ```
Note: You can install Qlib with `python setup.py install` as well. But it is not the recommended approach. It will skip `pip` and cause obscure problems. For example, only the command `pip install .` can overwrite the stable version installed by `pip install pyqlib`, while the command `python setup.py install` can't.
Tips: If you fail to install Qlib or run the examples in your environment, comparing your steps with the CI workflow may help you find the problem.
Tips for Mac: If you are using a Mac with M1, you might encounter issues in building the wheel for LightGBM due to missing dependencies from OpenMP. To solve the problem, install openmp first with `brew install libomp` and then run `pip install .` to build it successfully.
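If the installation appears to succeed but you are unsure, a quick sanity check like the one below (an optional snippet, not part of the official instructions) can confirm that the package is importable before moving on to data preparation.

```python
# Optional sanity check after installation (not part of the official steps).
import qlib

# Importing the package and printing its version is usually enough to confirm
# that the wheel was built and installed correctly.
print("qlib version:", qlib.__version__)
```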
❗ Due to a stricter data security policy, the official dataset is temporarily disabled. You can try this data source contributed by the community. Here is an example of downloading the data updated on 20240809.
```bash
wget https://github.com/chenditc/investment_data/releases/download/2024-08-09/qlib_bin.tar.gz
mkdir -p ~/.qlib/qlib_data/cn_data
tar -zxvf qlib_bin.tar.gz -C ~/.qlib/qlib_data/cn_data --strip-components=1
rm -f qlib_bin.tar.gz
```
The official dataset below will resume in the near future.
Load and prepare data by running the following code:
Get the data with the module:

```bash
# get 1d data
python -m qlib.run.get_data qlib_data --target_dir ~/.qlib/qlib_data/cn_data --region cn

# get 1min data
python -m qlib.run.get_data qlib_data --target_dir ~/.qlib/qlib_data/cn_data_1min --region cn --interval 1min
```

Or get the data from the source scripts:

```bash
# get 1d data
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/cn_data --region cn

# get 1min data
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/cn_data_1min --region cn --interval 1min
```
This dataset is created from public data collected by crawler scripts, which have been released in the same repository. Users can create the same dataset with them. See the description of the dataset for details.
Please pay ATTENTION that the data is collected from Yahoo Finance, and the data might not be perfect. We recommend that users prepare their own data if they have a high-quality dataset. For more information, users can refer to the related document.
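Because the crawled/community data can contain gaps, a quick check like the following (an illustrative sketch, not an official Qlib tool) helps gauge how complete the prepared dataset is before relying on it.

```python
# Illustrative check of the prepared data's coverage (not an official Qlib tool).
import qlib
from qlib.data import D

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

# How many trading days were deployed?
calendar = D.calendar(start_time="2015-01-01", end_time="2020-12-31", freq="day")
print("trading days:", len(calendar))

# Rough missing-value ratio on a few raw fields; large ratios suggest the
# crawled data needs cleaning or replacing with a higher-quality source.
df = D.features(D.instruments(market="csi300"), ["$close", "$volume"],
                start_time="2015-01-01", end_time="2020-12-31", freq="day")
print("NaN ratio per field:\n", df.isna().mean())
```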
This step is optional if users only want to try their models and strategies on historical data.
It is recommended that users update the data manually once (--trading_date 2021-05-25) and then set it to update automatically.
NOTE: Users can't incrementally update data based on the offline data provided by Qlib (some fields are removed to reduce the data size). Users should use the yahoo collector to download Yahoo data from scratch and then incrementally update it.
For more information, please refer to: yahoo collector
- Automatic update of data to the "qlib" directory each trading day (Linux)

  - use crontab: `crontab -e`

  - set up timed tasks:

    ```
    * * * * 1-5 python <script path> update_data_to_bin --qlib_data_1d_dir <user data dir>
    ```

    - script path: scripts/data_collector/yahoo/collector.py

- Manual update of data

  ```
  python scripts/data_collector/yahoo/collector.py update_data_to_bin --qlib_data_1d_dir <user data dir> --trading_date <start date> --end_date <end date>
  ```

  - trading_date: start of trading day
  - end_date: end of trading day (not included)
- Pull a docker image from the Docker Hub repository
docker pull pyqlib/qlib_image_stable:stable
- Start a new Docker container
docker run -it --name <container name> -v <Mounted local directory>:/app qlib_image_stable
- At this point you are in the docker environment and can run the qlib scripts. An example:
>>> python scripts/get_data.py qlib_data --name qlib_data_simple --target_dir ~/.qlib/qlib_data/cn_data --interval 1d --region cn
>>> python qlib/workflow/cli.py examples/benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml
- Exit the container
>>> exit
- Restart the container
docker start -i -a <container name>
- Stop the container
docker stop <container name>
- Delete the container
docker rm <container name>
- If you want to know more information, please refer to the documentation.
Qlib provides a tool named `qrun` to run the whole workflow automatically (including building datasets, training models, backtesting, and evaluation). You can start an auto quant research workflow and get a graphical report analysis according to the following steps:
- Quant Research Workflow: Run `qrun` with the LightGBM workflow config (workflow_config_lightgbm_Alpha158.yaml) as follows.

  ```bash
  cd examples  # Avoid running the program under a directory that contains `qlib`
  qrun benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml
  ```

  If users want to use `qrun` under debug mode, please use the following command:

  ```bash
  python -m pdb qlib/workflow/cli.py examples/benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml
  ```

  The result of `qrun` is as follows; please refer to Intraday Trading for more details about the result.

  ```
  'The following are analysis results of the excess return without cost.'
                         risk
  mean               0.000708
  std                0.005626
  annualized_return  0.178316
  information_ratio  1.996555
  max_drawdown      -0.081806
  'The following are analysis results of the excess return with cost.'
                         risk
  mean               0.000512
  std                0.005626
  annualized_return  0.128982
  information_ratio  1.444287
  max_drawdown      -0.091078
  ```

  Here are detailed documents for `qrun` and workflow.
- Graphical Reports Analysis: Run `examples/workflow_by_code.ipynb` with `jupyter notebook` to get graphical reports.

  - Forecasting signal (model prediction) analysis
  - Portfolio analysis
  - Explanation of above results

- Customized Quant Research Workflow by Code: The automatic workflow may not suit the research workflow of all Quant researchers. To support a flexible Quant research workflow, Qlib also provides a modularized interface that allows researchers to build their own workflow by code. Here is a demo for a customized Quant research workflow by code; a minimal sketch is also shown below.
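For reference, here is a minimal sketch of such a code-based workflow, adapted from the pattern used in the `examples/workflow_by_code` demo. The date ranges and hyperparameters are illustrative only and should be adjusted to your deployed data.

```python
# A minimal, illustrative workflow-by-code sketch (dates/parameters are examples).
import qlib
from qlib.utils import init_instance_by_config
from qlib.workflow import R
from qlib.workflow.record_temp import SignalRecord

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

dataset_config = {
    "class": "DatasetH",
    "module_path": "qlib.data.dataset",
    "kwargs": {
        "handler": {
            "class": "Alpha158",
            "module_path": "qlib.contrib.data.handler",
            "kwargs": {
                "instruments": "csi300",
                "start_time": "2008-01-01",
                "end_time": "2020-08-01",
                "fit_start_time": "2008-01-01",
                "fit_end_time": "2014-12-31",
            },
        },
        "segments": {
            "train": ("2008-01-01", "2014-12-31"),
            "valid": ("2015-01-01", "2016-12-31"),
            "test": ("2017-01-01", "2020-08-01"),
        },
    },
}
model_config = {
    "class": "LGBModel",
    "module_path": "qlib.contrib.model.gbdt",
    "kwargs": {"loss": "mse", "num_leaves": 210, "learning_rate": 0.05},
}

dataset = init_instance_by_config(dataset_config)
model = init_instance_by_config(model_config)

# Train, save the model, and record the forecasting signal in an experiment.
with R.start(experiment_name="workflow_by_code"):
    model.fit(dataset)
    R.save_objects(trained_model=model)
    recorder = R.get_recorder()
    SignalRecord(model, dataset, recorder).generate()
```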
Quant investment is a unique scenario with lots of key challenges to be solved. Currently, Qlib provides solutions for several of them.
Accurate forecasting of the stock price trend is a very important part of constructing profitable portfolios. However, the huge amount of data in various formats in the financial market makes it challenging to build forecasting models.
An increasing number of SOTA Quant research works/papers, which focus on building forecasting models to mine valuable signals/patterns in complex financial data, are released in Qlib. Here is a list of models built on Qlib:
- GBDT based on XGBoost (Tianqi Chen, et al. KDD 2016)
- GBDT based on LightGBM (Guolin Ke, et al. NIPS 2017)
- GBDT based on Catboost (Liudmila Prokhorenkova, et al. NIPS 2018)
- MLP based on pytorch
- LSTM based on pytorch (Sepp Hochreiter, et al. Neural computation 1997)
- GRU based on pytorch (Kyunghyun Cho, et al. 2014)
- ALSTM based on pytorch (Yao Qin, et al. IJCAI 2017)
- GATs based on pytorch (Petar Velickovic, et al. 2017)
- SFM based on pytorch (Liheng Zhang, et al. KDD 2017)
- TFT based on tensorflow (Bryan Lim, et al. International Journal of Forecasting 2019)
- TabNet based on pytorch (Sercan O. Arik, et al. AAAI 2019)
- DoubleEnsemble based on LightGBM (Chuheng Zhang, et al. ICDM 2020)
- TCTS based on pytorch (Xueqing Wu, et al. ICML 2021)
- Transformer based on pytorch (Ashish Vaswani, et al. NeurIPS 2017)
- Localformer based on pytorch (Juyong Jiang, et al.)
- TRA based on pytorch (Hengxu, Dong, et al. KDD 2021)
- TCN based on pytorch (Shaojie Bai, et al. 2018)
- ADARNN based on pytorch (YunTao Du, et al. 2021)
- ADD based on pytorch (Hongshun Tang, et al. 2020)
- IGMTF based on pytorch (Wentao Xu, et al. 2021)
- HIST based on pytorch (Wentao Xu, et al. 2021)
- KRNN based on pytorch
- Sandwich based on pytorch
Your PR of new Quant models is highly welcomed.
The performance of each model on the `Alpha158` and `Alpha360` datasets can be found here.
All the models listed above are runnable with `Qlib`. Users can find the config files we provide and some details about the models in the benchmarks folder. More information can be retrieved from the model files listed above.
`Qlib` provides three different ways to run a single model; users can pick the one that fits their case best:

- Users can use the tool `qrun` mentioned above to run a model's workflow based on a config file.

- Users can create a `workflow_by_code` python script based on the one listed in the `examples` folder.

- Users can use the script `run_all_model.py` listed in the `examples` folder to run a model. Here is an example of the specific shell command to be used: `python run_all_model.py run --models=lightgbm`, where the `--models` argument can take any number of models listed above (the available models can be found in benchmarks). For more use cases, please refer to the file's docstrings.

  - NOTE: Each baseline has different environment dependencies; please make sure that your python version aligns with the requirements (e.g. TFT only supports Python 3.6~3.7 due to the limitation of `tensorflow==1.15.0`).
`Qlib` also provides a script `run_all_model.py` which can run multiple models for several iterations. (Note: the script only supports Linux for now; other operating systems will be supported in the future. It also doesn't support running the same model multiple times in parallel; this will be fixed in future development as well.)
The script will create a unique virtual environment for each model and delete the environments after training. Thus, only experiment results such as `IC` and `backtest` results will be generated and stored.
Here is an example of running all the models for 10 iterations:
python run_all_model.py run 10
It also provides the API to run specific models at once. For more use cases, please refer to the file's docstrings.
Due to the non-stationary nature of the financial market's environment, the data distribution may change in different periods, which makes the performance of models built on training data decay on future test data. So adapting the forecasting models/strategies to market dynamics is very important for the models'/strategies' performance.
Here is a list of solutions built on `Qlib`.
Qlib now supports reinforcement learning, a feature designed to model continuous investment decisions. This functionality assists investors in optimizing their trading strategies by learning from interactions with the environment to maximize some notion of cumulative reward.
Here is a list of solutions built on `Qlib`, categorized by scenario.
Here is the introduction of this scenario. All the methods below are compared here.
- TWAP
- PPO: "An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization", IJCAI 2020
- OPDS: "Universal Trading for Order Execution with Oracle Policy Distillation", AAAI 2021
Datasets play a very important role in Quant. Here is a list of the datasets built on `Qlib`:
Dataset | US Market | China Market |
---|---|---|
Alpha360 | √ | √ |
Alpha158 | √ | √ |
Here is a tutorial to build a dataset with `Qlib`.

Your PR to build a new Quant dataset is highly welcomed.
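As a rough illustration of how these datasets are exposed in code, the sketch below builds the `Alpha158` handler directly and fetches its feature table. This is only a sketch; the date ranges are placeholders and should match your deployed data.

```python
# Illustrative sketch: building the Alpha158 dataset handler directly
# (date ranges are placeholders; adjust to your deployed data).
import qlib
from qlib.contrib.data.handler import Alpha158

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

handler = Alpha158(
    instruments="csi300",
    start_time="2015-01-01",
    end_time="2019-12-31",
    fit_start_time="2015-01-01",
    fit_end_time="2017-12-31",
)

# Fetch the processed feature/label dataframe produced by the handler.
df = handler.fetch()
print(df.shape)
print(df.head())
```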
Qlib is highly customizable and a lot of its components are learnable.
The learnable components are instances of `Forecast Model` and `Trading Agent`. They are learned based on the `Learning Framework` layer and then applied to multiple scenarios in the `Workflow` layer.
The learning framework leverages the `Workflow` layer as well (e.g. sharing `Information Extractor`, creating environments based on `Execution Env`).
Based on learning paradigms, they can be categorized into reinforcement learning and supervised learning.
- For supervised learning, the detailed docs can be found here.
- For reinforcement learning, the detailed docs can be found here. Qlib's RL learning framework leverages `Execution Env` in the `Workflow` layer to create environments. It's worth noting that `NestedExecutor` is supported as well. This empowers users to optimize different levels of strategies/models/agents together (e.g. optimizing an order execution strategy for a specific portfolio management strategy); an illustrative nested configuration is sketched below.
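To make the nesting idea concrete, the snippet below sketches the shape of a two-level configuration in the style of Qlib's nested decision execution example: a daily portfolio-level decision whose orders are handed to an intraday execution strategy. Class names and keyword arguments follow that example but should be treated as an approximate sketch, not a verified configuration.

```python
# Illustrative sketch of a nested executor configuration, in the style of the
# nested decision execution example; treat names/kwargs as an approximation.
executor_config = {
    "class": "NestedExecutor",
    "module_path": "qlib.backtest.executor",
    "kwargs": {
        "time_per_step": "day",          # outer level: daily portfolio decisions
        "inner_strategy": {              # how each daily decision is executed intraday
            "class": "TWAPStrategy",
            "module_path": "qlib.contrib.strategy.rule_strategy",
        },
        "inner_executor": {              # inner level: simulate intraday execution
            "class": "SimulatorExecutor",
            "module_path": "qlib.backtest.executor",
            "kwargs": {"time_per_step": "30min"},
        },
    },
}
# Such a config is typically placed in the backtest section of a workflow
# config, so that the outer portfolio strategy and the inner execution
# strategy can be evaluated -- and optimized -- together.
```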
If you want to have a quick glance at the most frequently used components of qlib, you can try notebooks here.
The detailed documents are organized in docs. Sphinx and the readthedocs theme are required to build the documentation in HTML format.
```bash
cd docs/
conda install sphinx sphinx_rtd_theme -y
# Otherwise, you can install them with pip
# pip install sphinx sphinx_rtd_theme
make html
```
You can also view the latest document online directly.
Qlib is in active and continuing development. Our plan is in the roadmap, which is managed as a github project.
The data server of Qlib can be deployed either in `Offline` mode or `Online` mode. The default mode is offline mode.
Under `Offline` mode, the data will be deployed locally.
Under `Online` mode, the data will be deployed as a shared data service. The data and their cache will be shared by all the clients. The data retrieval performance is expected to improve due to a higher rate of cache hits, and it will consume less disk space, too. The documents of the online mode can be found in Qlib-Server. The online mode can be deployed automatically with Azure-CLI-based scripts. The source code of the online data server can be found in the Qlib-Server repository.
The performance of data processing is important to data-driven methods like AI technologies. As an AI-oriented platform, Qlib provides a solution for data storage and data processing. To demonstrate the performance of Qlib data server, we compare it with several other data storage solutions.
We evaluate the performance of several storage solutions by finishing the same task, which creates a dataset (14 features/factors) from the basic OHLCV daily data of a stock market (800 stocks each day from 2007 to 2020). The task involves data queries and processing.
| | HDF5 | MySQL | MongoDB | InfluxDB | Qlib -E -D | Qlib +E -D | Qlib +E +D |
|---|---|---|---|---|---|---|---|
| Total (1CPU) (seconds) | 184.4±3.7 | 365.3±7.5 | 253.6±6.7 | 368.2±3.6 | 147.0±8.8 | 47.6±1.0 | 7.4±0.3 |
| Total (64CPU) (seconds) | | | | | | 8.8±0.6 | 4.2±0.2 |
- `+(-)E` indicates with (out) `ExpressionCache`
- `+(-)D` indicates with (out) `DatasetCache`
Most general-purpose databases take too much time to load data. After looking into the underlying implementation, we find that data go through too many layers of interfaces and unnecessary format transformations in general-purpose database solutions. Such overheads greatly slow down the data loading process. Qlib data are stored in a compact format, which makes it efficient to combine them into arrays for scientific computation.
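The expression engine is what the `ExpressionCache` accelerates: derived factors are written as formulas over the raw fields and evaluated (and cached) by the data server. Below is a small illustrative query; the derived fields are arbitrary examples, not the 14 factors used in the benchmark above.

```python
# Illustrative expression query; the derived fields below are arbitrary examples
# of the kind of factor expressions that ExpressionCache can cache.
import qlib
from qlib.data import D

qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")

fields = [
    "Ref($close, 1) / $close - 1",   # 1-day (shifted) return
    "Mean($close, 5) / $close",      # 5-day moving-average ratio
    "Std($close, 20)",               # 20-day volatility of close
]
df = D.features(D.instruments(market="csi300"), fields,
                start_time="2019-01-01", end_time="2019-12-31", freq="day")
print(df.head())
```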
- If you have any issues, please create an issue here or send messages in gitter.
- If you want to make contributions to `Qlib`, please create pull requests.
- For other reasons, you are welcome to contact us by email (qlib@microsoft.com).
- We are recruiting new members (both FTEs and interns); your resumes are welcome!
Join IM discussion groups: Gitter
We appreciate all contributions and thank all the contributors!
Before we released Qlib as an open-source project on GitHub in Sep 2020, Qlib was an internal project in our group. Unfortunately, the internal commit history was not kept. Many members of our group also contributed a lot to Qlib, including Ruihua Wang, Yinda Zhang, Haisu Yu, Shuyu Wang, Bochen Pang, and Dong Zhou. Special thanks to Dong Zhou for his initial version of Qlib.
This project welcomes contributions and suggestions.
Here are some code standards and development guidance for submitting a pull request.
Making contributions is not a hard thing. Solving an issue (maybe just answering a question raised in the issues list or gitter), reporting or fixing a bug, improving the documents, and even fixing a typo are all important contributions to Qlib.
For example, if you want to contribute to Qlib's document/code, you can follow the steps in the figure below.
If you don't know how to start to contribute, you can refer to the following examples.
Type | Examples |
---|---|
Solving issues | Answer a question; report or fix a bug |
Docs | Improve docs quality; Fix a typo |
Feature | Implement a requested feature like this; Refactor interfaces |
Dataset | Add a dataset |
Models | Implement a new model, some instructions to contribute models |
Good first issues are labelled to indicate that they are easy to start your contributions.
You can find some imperfect implementations in Qlib with `rg 'TODO|FIXME' qlib`.
If you would like to become one of Qlib's maintainers to contribute more (e.g. help merge PRs, triage issues), please contact us by email (qlib@microsoft.com). We are glad to help upgrade your permission.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the right to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.