jisungk / deepjazz
Deep learning driven jazz generation using Keras & Theano!
I built deepjazz in 36 hours for HackPrinceton, Spring 2016. It uses Keras & Theano, two deep learning libraries, to generate jazz music. Specifically, it builds a two-layer LSTM that learns from a given MIDI file. It uses deep learning, the AI technology that powers Google's AlphaGo and IBM's Watson, to make music -- something considered deeply human.
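For a sense of what a two-layer LSTM looks like in Keras, here is a minimal sketch (not the exact deepjazz architecture): a stacked-LSTM model that predicts the next note/chord one-hot vector from a fixed-length window of previous ones. The names `max_len` and `n_values`, the layer sizes, and the dropout rates are assumptions for illustration.

```python
# Minimal sketch of a two-layer LSTM sequence model in Keras.
# max_len, n_values, layer sizes, and dropout rates are assumptions,
# not necessarily the values used by deepjazz.
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation, Dropout

max_len = 20     # length of each input sequence (assumption)
n_values = 78    # size of the note/chord vocabulary (assumption)

model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(max_len, n_values)))
model.add(Dropout(0.2))
model.add(LSTM(128, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(n_values))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
# model.fit(X, y, batch_size=128, nb_epoch=num_epochs)
# where X has shape (samples, max_len, n_values) and y has shape (samples, n_values)
```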
Check out deepjazz's music on SoundCloud!
Run on CPU with command:
python generator.py [# of epochs]
Run on GPU with command:
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python generator.py [# of epochs]
Note: preprocess.py must be modified to work with other MIDI files (the relevant "melody" MIDI part needs to be selected; see the sketch below). The ability to handle this natively is a planned feature.
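As a rough illustration of what "selecting the melody part" involves, the sketch below uses music21 (the library deepjazz's preprocessing builds on) to list the parts in a MIDI file and pick one. The filename and the part index are placeholders; the right index depends on your MIDI file and may differ from how preprocess.py actually indexes parts.

```python
# Hedged sketch: inspect a MIDI file's parts with music21 and pick the melody.
# 'your_file.mid' and the part index 0 are placeholders, not deepjazz defaults.
from music21 import converter

midi_data = converter.parse('your_file.mid')

# Print each part's index and name to find the melody track.
for i, part in enumerate(midi_data.parts):
    print(i, part.partName)

# Once identified, select the melody part by its index.
melody_part = midi_data.parts[0]
```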
Ji-Sung Kim
Princeton University, Department of Computer Science
jisungk@princeton.edu
This project adapts a lot of preprocessing code (with permission) from Evan Chow's jazzml. Thank you Evan! Public examples from the Keras documentation were also referenced.
Code is licensed under the Apache License 2.0
Images and other media are copyrighted (Ji-Sung Kim)