matthiasplappert / keras-rl
- Wednesday, August 3, 2016, 03:14:18
Python
Deep Reinforcement Learning for Keras.
keras-rl implements some state-of-the-art deep reinforcement learning algorithms in Python and seamlessly integrates with the deep learning library Keras. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithms efficiently on either CPU or GPU.
Furthermore, keras-rl works with OpenAI Gym out of the box. This means that evaluating and playing around with different algorithms is easy.
Of course you can extend keras-rl according to your own needs. You can use built-in Keras callbacks and metrics or define your own.
What's more, it is easy to implement your own environments, and even your own algorithms, by extending a few simple abstract classes.
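To make the "extend a few abstract classes" idea concrete, here is a minimal sketch of the pattern. Note that keras-rl's actual environment base class (in `rl.core`) has a few more methods such as `render` and `close`; the names below simply mirror the Gym-style `reset`/`step` interface it follows, and `CoinFlipEnv` is a made-up toy environment for illustration.

```python
class Env:
    """Abstract environment: subclasses implement reset() and step()."""
    def reset(self):
        raise NotImplementedError
    def step(self, action):
        raise NotImplementedError  # returns (observation, reward, done, info)

class CoinFlipEnv(Env):
    """Toy environment: guess a fixed hidden bit; the episode ends after one step."""
    def __init__(self, hidden_bit=1):
        self.hidden_bit = hidden_bit
    def reset(self):
        return 0  # dummy observation
    def step(self, action):
        reward = 1.0 if action == self.hidden_bit else 0.0
        return 0, reward, True, {}

env = CoinFlipEnv()
obs = env.reset()
obs, reward, done, info = env.step(1)
print(reward, done)  # 1.0 True
```

Any agent that talks to the `reset`/`step` interface can then be run against such a class unchanged.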
In a nutshell: keras-rl makes it really easy to run state-of-the-art deep reinforcement learning algorithms, uses Keras (and thus Theano or TensorFlow), and was built with OpenAI Gym in mind.
As of today, the following algorithms have been implemented:
I'm currently working on the following algorithms, which can be found on the experimental branch:
Notice that these are only experimental and might currently not even run.
Installing keras-rl is easy. Just run the following commands and you should be good to go:
git clone https://github.com/matthiasplappert/keras-rl.git
cd keras-rl
python setup.py install
This will install keras-rl and all necessary dependencies.
If you want to run the examples, you'll also have to install gym by OpenAI.
Please refer to their installation instructions.
It's quite easy and works nicely on Ubuntu and Mac OS X.
You'll also need the h5py package to load and save model weights, which can be installed using the following command:
pip install h5py
Once you have installed everything, you can try out a simple example:
python examples/dqn_cartpole.py
This is a very simple example and it should converge relatively quickly, so it's a great way to get started! It also visualizes the game during training, so you can watch it learn. How cool is that?
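For readers new to the underlying idea, here is a toy-scale illustration (plain stdlib Python, not keras-rl code) of the Q-learning update that DQN builds on. DQN, one of the algorithms keras-rl implements, replaces the table below with a neural network but regresses onto the same target; the corridor environment and all constants here are made up for the example.

```python
import random

N_STATES = 5                   # states 0..4; state 4 is terminal and rewarding
ACTIONS = (0, 1)               # 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def env_step(state, action):
    """Toy environment dynamics: returns (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(200):                           # training episodes
    state = 0
    for _ in range(100):                       # step cap per episode
        if random.random() < EPSILON:          # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:                                  # greedy, ties broken randomly
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        nxt, reward, done = env_step(state, action)
        target = reward + (0.0 if done else GAMMA * max(q[(nxt, a)] for a in ACTIONS))
        q[(state, action)] += ALPHA * (target - q[(state, action)])  # Q-learning update
        state = nxt
        if done:
            break

# After training, moving right from the start should score higher than moving left.
print(q[(0, 1)] > q[(0, 0)])  # True
```

The CartPole example runs the same loop, except the state is continuous and the Q-function is a small Keras network instead of a dictionary.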
Unfortunately, the documentation of keras-rl is currently almost non-existent.
However, you can find a couple more examples that illustrate the usage of both DQN (for tasks with discrete actions) and DDPG (for tasks with continuous actions).
While these examples are not a replacement for proper documentation, they should be enough to get started quickly and to see the magic of reinforcement learning for yourself.
I also encourage you to play around with other environments (OpenAI Gym has plenty) and maybe even try to find better hyperparameters for the existing ones.
If you have questions or problems, please file an issue or, even better, fix the problem yourself and submit a pull request!
Training times can be very long depending on the complexity of the environment.
This repo provides some weights that were obtained by running (at least some of) the examples included in keras-rl.
You can load the weights using the load_weights method on the respective agents.
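The save/load pattern looks like the sketch below. Note this is a hedged stand-in: keras-rl agents delegate `save_weights`/`load_weights` to Keras, which stores weights in HDF5 via h5py, whereas `ToyModel` here is an invented plain-Python class using JSON so the example stays self-contained.

```python
import json
import os
import tempfile

class ToyModel:
    """Stand-in for an agent's model: two scalar weights, saved as JSON."""
    def __init__(self):
        self.weights = [0.0, 0.0]
    def save_weights(self, filepath):
        with open(filepath, "w") as f:
            json.dump(self.weights, f)
    def load_weights(self, filepath):
        with open(filepath) as f:
            self.weights = json.load(f)

trained = ToyModel()
trained.weights = [1.5, -0.25]           # pretend these came from training
path = os.path.join(tempfile.mkdtemp(), "weights.json")
trained.save_weights(path)

fresh = ToyModel()
fresh.load_weights(path)                 # restore without retraining
print(fresh.weights)  # [1.5, -0.25]
```

With the real library, the same two calls let you skip hours of training by restoring the provided weights into a freshly built agent.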
That's it. However, if you want to run the examples, you'll also need the following dependencies:
keras-rl also works with TensorFlow. To find out how to use TensorFlow instead of Theano, please refer to the Keras documentation.
If you use keras-rl in your research, you can cite it as follows:
@misc{plappert2016kerasrl,
author = {Matthias Plappert},
title = {keras-rl},
year = {2016},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/matthiasplappert/keras-rl}},
}
The foundation for this library was developed during my work at the High Performance Humanoid Technologies (H²T) lab at the Karlsruhe Institute of Technology (KIT). It has since been adapted to become a general-purpose library.