ibab / tensorflow-wavenet
- Thursday, September 15, 2016, 03:13:21
Python
A TensorFlow implementation of DeepMind's WaveNet paper
This is a TensorFlow implementation of the WaveNet generative neural network architecture for audio generation.
The WaveNet architecture directly generates a raw audio waveform, and shows excellent results in TTS and general audio generation (see the DeepMind blog post and paper for examples).
The network is a model of the conditional probability to generate the next sample in the audio waveform, given all previous samples and possibly additional parameters. It is constructed from a stack of causal dilated layers, each of which is a dilated convolution (convolution with holes), which only accesses the current and past audio samples.
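The autoregressive idea described above can be sketched as follows. This is a toy illustration, not the repository's actual model: `next_sample_distribution` is a hypothetical stand-in for the WaveNet forward pass, which in reality outputs a distribution over quantized amplitude levels conditioned on the full history.

```python
import random

def next_sample_distribution(history):
    # Hypothetical stand-in for the network: given all previous
    # samples, return a probability for each of 256 quantized
    # amplitude levels. Here we simply return a uniform distribution.
    return [1.0 / 256] * 256

def generate(n_samples):
    # Generation proceeds one sample at a time: each new sample is
    # drawn from p(x_t | x_1, ..., x_{t-1}) and fed back as input.
    waveform = []
    for _ in range(n_samples):
        probs = next_sample_distribution(waveform)
        level = random.choices(range(256), weights=probs)[0]
        waveform.append(level)
    return waveform

print(len(generate(100)))  # generates 100 samples
```

Because every sample depends on all earlier ones, generation is inherently sequential, which is why sampling from such models is slow.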
The network is implemented in the file wavenet.py.
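A causal dilated convolution can be illustrated in a few lines of plain Python. This is a minimal sketch (filter width 2, no TensorFlow), assuming zero left-padding; the point is that the output at time t reads only the current and past samples, and that stacking layers with doubling dilations grows the receptive field exponentially:

```python
def causal_dilated_conv(x, weights, dilation):
    """1-D causal dilated convolution with filter width 2.

    The output at time t depends only on x[t] and x[t - dilation],
    so no future samples are accessed. Missing past samples are
    treated as zero (left padding).
    """
    w_past, w_current = weights
    out = []
    for t in range(len(x)):
        past = x[t - dilation] if t - dilation >= 0 else 0.0
        out.append(w_past * past + w_current * x[t])
    return out

# Stacking layers with dilations 1, 2, 4, ... doubles the receptive
# field at each layer while keeping every output causal.
signal = [1.0, 2.0, 3.0, 4.0, 5.0]
h = causal_dilated_conv(signal, (1.0, 1.0), dilation=1)
h = causal_dilated_conv(h, (1.0, 1.0), dilation=2)
print(h)  # [1.0, 3.0, 6.0, 10.0, 14.0]
```

With dilations 1, 2, 4, ..., a stack of n such layers sees 2^n past samples while using only n layers of weights.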
TensorFlow needs to be installed before running the training script. TensorFlow 0.10 and the current master version are supported.
In addition, the ffmpeg binary needs to be available on the command line. It is required by the TensorFlow ffmpeg contrib package, which is used to decode the audio files.
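A quick way to verify the ffmpeg requirement before starting a long training run is to check the PATH from Python. This is a small optional sketch, not part of the repository; `ffmpeg_available` is a hypothetical helper name:

```python
import shutil

def ffmpeg_available():
    # shutil.which searches the PATH the same way the shell would,
    # so this matches what the ffmpeg contrib package will see.
    return shutil.which("ffmpeg") is not None

if not ffmpeg_available():
    print("ffmpeg not found on PATH; audio decoding will fail.")
```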
The VCTK corpus is currently used. In order to train the network, you need to download the corpus and unpack it in the same directory as the train.py script.
Then, execute
python train.py
to train the network.
You can see documentation on the settings by running
python train.py --help
Disclaimer: This repository is not affiliated with DeepMind or Google in any way.