# TensorFlow Implementation of Text-to-Image Synthesis Using Thought Vectors
This is an experimental TensorFlow implementation of synthesizing images from captions using Skip Thought Vectors. The images are synthesized using the GAN-CLS algorithm from the paper Generative Adversarial Text-to-Image Synthesis. This implementation is built on top of the excellent DCGAN in Tensorflow. The following is the model architecture; the blue bars represent the Skip Thought Vectors for the captions.

Image Source: Generative Adversarial Text-to-Image Synthesis paper
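Concretely, the GAN-CLS generator conditions on the caption by compressing the skip thought vector to a small text embedding and concatenating it with the noise vector. Below is a minimal numpy sketch of that input construction, using the default dimensions listed under Training; the projection weights are random stand-ins for learned parameters, not the repository's actual code:

```python
import numpy as np

batch_size, z_dim = 64, 100                       # defaults from this README
caption_vector_length, t_dim = 1024, 256

z = np.random.uniform(-1.0, 1.0, (batch_size, z_dim))          # noise input
captions = np.random.randn(batch_size, caption_vector_length)  # skip thought vectors

# Stand-in for a learned linear projection that compresses the caption to t_dim
W = np.random.randn(caption_vector_length, t_dim) * 0.01
t_emb = captions @ W
t_emb = np.maximum(t_emb, 0.2 * t_emb)            # leaky ReLU

# The generator input is the noise concatenated with the text embedding
g_input = np.concatenate([z, t_emb], axis=1)      # shape (64, 356)
print(g_input.shape)
```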
## Datasets

- Download the flowers dataset images and extract them to Data/flowers/jpg. Also download the captions from this link. Extract the archive, copy the text_c10 folder and paste it in Data/flowers.
- Download the pretrained skip thought vectors model and save the downloaded files in Data/skipthoughts.
- Make empty directories Data/samples, Data/val_samples and Data/Models. They will be used for sampling the generated images and saving the trained models.

## Usage

Data processing: extract the skip thought vectors for the flowers dataset using:

```
python data_loader.py --data_set="flowers"
```
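A small Python 3 sketch that creates the directory layout described above (paths taken from this README):

```python
import os

# Directories expected by the data loader, sampler and trainer (from this README)
for d in ["Data/flowers/jpg", "Data/skipthoughts", "Data/samples",
          "Data/val_samples", "Data/Models"]:
    os.makedirs(d, exist_ok=True)  # no-op if the directory already exists
```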
## Training

Basic usage:

```
python train.py --data_set="flowers"
```
Options:

- z_dim: Noise dimension. Default is 100.
- t_dim: Text feature dimension. Default is 256.
- batch_size: Batch size. Default is 64.
- image_size: Image dimension. Default is 64.
- gf_dim: Number of conv filters in the first layer of the generator. Default is 64.
- df_dim: Number of conv filters in the first layer of the discriminator. Default is 64.
- gfc_dim: Dimension of the generator units for the fully connected layer. Default is 1024.
- caption_vector_length: Length of the caption vector. Default is 1024.
- data_dir: Data directory. Default is Data/.
- learning_rate: Learning rate. Default is 0.0002.
- beta1: Momentum term for the Adam update. Default is 0.5.
- epochs: Max number of epochs. Default is 600.
- resume_model: Resume training from a pretrained model path.
- data_set: Dataset to train on. Default is flowers.
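For reference, the GAN-CLS objective from the paper trains the discriminator on three kinds of pairs: real image with matching caption, real image with mismatched caption, and generated image with matching caption. The following is a minimal numpy sketch of the loss arithmetic; the logits are random stand-ins for discriminator outputs and the 1/2 weighting follows the paper, so treat it as an illustration rather than the repository's actual training code:

```python
import numpy as np

def bce_from_logits(logits, target):
    # Binary cross-entropy against a constant target, computed from logits
    s = 1.0 / (1.0 + np.exp(-logits))
    return -(target * np.log(s) + (1.0 - target) * np.log(1.0 - s)).mean()

# Stand-ins for discriminator logits on one batch of 64 pairs
d_real_right = np.random.randn(64)  # real image, matching caption
d_real_wrong = np.random.randn(64)  # real image, mismatched caption
d_fake_right = np.random.randn(64)  # generated image, matching caption

# Discriminator: real+right -> 1, real+wrong -> 0, fake+right -> 0
d_loss = bce_from_logits(d_real_right, 1.0) + 0.5 * (
    bce_from_logits(d_real_wrong, 0.0) + bce_from_logits(d_fake_right, 0.0))

# Generator: try to make D score the generated image with its caption as real
g_loss = bce_from_logits(d_fake_right, 1.0)
```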
## Generating Images from Captions

Write the captions in a text file and save it as Data/sample_captions.txt. Generate the skip thought vectors for these captions using:

```
python generate_thought_vectors.py --caption_file="Data/sample_captions.txt"
```
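For example, a short Python snippet that writes a couple of made-up flower captions to the file, assuming one caption per line:

```python
# Hypothetical example captions; the assumed file format is one caption per line
captions = [
    "the flower has petals that are bright blue",
    "this flower has white petals and a round yellow center",
]
with open("Data/sample_captions.txt", "w") as f:
    f.write("\n".join(captions) + "\n")
```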
Generate the images for the thought vectors using:

```
python generate_images.py --model_path=<path to the trained model> --n_images=8
```

n_images specifies the number of images to be generated per caption. The generated images will be saved in Data/val_samples/. Run python generate_images.py --help for more options.
## Sample Images Generated

Following are the images generated by the generative model from the captions.
## Trained Models

Model checkpoints are saved in Data/Models during training. Use a checkpoint path from this directory as the model_path when generating images.
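For example, with a hypothetical checkpoint file name (the actual names are written by train.py):

```
python generate_images.py --model_path="Data/Models/latest_model_flowers_temp.ckpt" --n_images=8
```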