# iGAN: Interactive Image Generation powered by GAN
[Project] [Youtube]
Contact: Jun-Yan Zhu (junyanz at eecs dot berkeley dot edu)
iGAN (short for interactive GAN) is the authors' implementation of the interactive image generation interface described in:
"Generative Visual Manipulation on the Natural Image Manifold"
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A. Efros
In European Conference on Computer Vision (ECCV) 2016
Given a few user strokes, our system can produce photo-realistic samples that best satisfy the user edits in real time. The system is based on deep generative models such as Generative Adversarial Networks (GAN) and DCGAN, and serves two purposes: an intelligent drawing interface for automatically generating images inspired by the user's strokes, and a visual debugging tool for understanding and visualizing deep generative models.
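Under the hood, satisfying user edits means finding a latent code whose generated image matches the painted constraints. The toy numpy sketch below illustrates only that idea: a fixed linear map stands in for the generator, and every name and value in it is purely illustrative, not from this repo.

```python
import numpy as np

# Toy stand-in for a generator: a fixed linear map from latent code to "pixels".
rng = np.random.RandomState(0)
W = rng.randn(16, 8)                  # 16 "pixels", 8-dimensional latent space
G = lambda z: W.dot(z)

target = rng.randn(16)                # desired colors from user strokes (toy values)
mask = np.zeros(16)
mask[:4] = 1.0                        # the user only painted the first 4 "pixels"

# Gradient descent on 0.5 * || mask * (G(z) - target) ||^2 with respect to z.
z = np.zeros(8)
lr = 0.03
for _ in range(5000):
    grad = W.T.dot(mask * (G(z) - target))
    z -= lr * grad

err = np.abs(mask * (G(z) - target)).max()
print("max error on constrained pixels: %.2e" % err)
```

The real system optimizes the latent code of a deep generator under similar edit constraints (plus smoothness and realism terms described in the paper), which is what keeps every intermediate result on the natural image manifold.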
We are working on supporting more generative models (e.g. variational autoencoders) and more deep learning frameworks (e.g. TensorFlow). Please cite our paper if you find this code useful in your research.
## Getting started

- Install the required third-party libraries (see Requirements below).
- Clone this repository:

```bash
git clone https://github.com/junyanz/iGAN
cd iGAN
```

- Download a pre-trained model (see Pre-trained models below).
- Run the interface with `THEANO_FLAGS` set for your GPU:

```bash
THEANO_FLAGS='device=gpu0, floatX=float32, nvcc.fastmath=True' python iGAN_main.py --model_name outdoor_64 --model_type dcgan_theano
```

## Requirements

The code is written in Python 2 and requires the following 3rd-party libraries:

- OpenCV and scikit-image:

```bash
sudo apt-get install python-opencv
sudo pip install -U scikit-image
```

- Theano (development version):

```bash
sudo pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
```

- PyQt4, qdarkstyle, and dominate:

```bash
sudo apt-cache search pyqt
sudo apt-get install python-qt4
sudo pip install qdarkstyle
sudo pip install dominate
```

## Pre-trained models

Download a Theano DCGAN model (e.g. outdoor_64.dcgan_theano). Before using our system, please check out the random real images vs. DCGAN generated samples to see what kind of images a model can produce:

```bash
bash ./models/scripts/download_dcgan_model.sh outdoor_64.dcgan_theano
```

We provide a simple script to generate samples from a pre-trained DCGAN model. You can run it to test whether Theano, CUDA, and cuDNN are configured properly before running our interface:

```bash
THEANO_FLAGS='device=gpu0, floatX=float32, nvcc.fastmath=True' python generate_samples.py --model_name outdoor_64 --model_type dcgan_theano --output_image outdoor_64_dcgan.png
```

See [Youtube] at 2:18s for the interactive image generation demos.
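For intuition, sample generation amounts to drawing latent vectors from the DCGAN prior and pushing them through the generator network. A minimal numpy sketch of the sampling half is below; `sample_latent` is an illustrative helper, not a function from this repo:

```python
import numpy as np

def sample_latent(batch_size, z_dim=100, seed=0):
    """Draw latent codes z ~ Uniform(-1, 1), the prior DCGAN samples from."""
    rng = np.random.RandomState(seed)
    return rng.uniform(-1.0, 1.0, size=(batch_size, z_dim)).astype(np.float32)

z = sample_latent(16)
print(z.shape)  # (16, 100)
# In the real script these codes would be fed through the generator network,
# and the resulting images tiled into --output_image.
```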
## User interface

Layout:

- Edits: a button to display/hide user edits.
- Brushes: Coloring Brush for changing the color of a specific region; Sketching Brush for outlining the shape; Warping Brush for modifying the shape more explicitly.
- Buttons: Play: play the interpolation sequence; Fix: use the current result as additional constraints for further editing; Restart: restart the system; Save: save the result to a webpage; Edits: check the box if you would like to show the edits on top of the generated image.

Brush usage:

- Coloring Brush: right click to select a color; hold left click to paint; scroll the mouse wheel to adjust the width of the brush.
- Sketching Brush: hold left click to sketch the shape.
- Warping Brush: we recommend using the coloring and sketching brushes before the warping brush. Right click to select a square region; hold left click to drag the region; scroll the mouse wheel to adjust the size of the square region.

Shortcuts: P for Play, F for Fix, R for Restart, S for Save, E for Edits, Q for quitting the program.

## Command line arguments

Type `python iGAN_main.py --help` for a complete list of the arguments. Here we discuss some important ones:
- `--model_name`: the name of the model (e.g. outdoor_64, shoes_64, etc.)
- `--model_type`: currently only supports dcgan_theano.
- `--model_file`: the file that stores the generative model; if not specified, `model_file = './models/%s.%s' % (model_name, model_type)`.
- `--top_k`: the number of candidate results being displayed.

## Define your own model

The current design chains a UI python class (e.g. `gui_draw.py`) => a constrained optimization python class (`constrained_opt_theano.py`) => a deep generative model python class (e.g. `dcgan_theano.py`). To incorporate your own generative model, create a new python class (e.g. `vae_theano.py`) under the `model_def` folder with the same interface as `dcgan_theano.py`, and specify `--model_type vae_theano` on the command line. We are working on a TensorFlow-based optimization class (`constrained_opt_tensorflow.py`) now. Once that code is released, you will be able to create your own TensorFlow model class (e.g. `dcgan_tensorflow.py`) under the `model_def` folder.

## Citation

```
@inproceedings{zhu2016generative,
  title={Generative Visual Manipulation on the Natural Image Manifold},
  author={Zhu, Jun-Yan and Kr{\"a}henb{\"u}hl, Philipp and Shechtman, Eli and Efros, Alexei A.},
  booktitle={Proceedings of European Conference on Computer Vision (ECCV)},
  year={2016}
}
```
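To make the model plug-in structure concrete, a new class under `model_def` might look roughly like the skeleton below. The method name `gen_samples` and the tensor shapes are assumptions for illustration only; match the actual interface of `dcgan_theano.py` when writing a real plug-in.

```python
import numpy as np

class VAETheanoStub(object):
    """Hypothetical skeleton for a model_def/vae_theano.py plug-in.

    The constrained optimizer needs the model to map latent codes to
    images; the method names and shapes here are illustrative
    assumptions, not the real interface of dcgan_theano.py.
    """

    def __init__(self, model_name='outdoor_64', z_dim=100, npx=64):
        self.model_name = model_name
        self.z_dim = z_dim
        self.npx = npx  # output image resolution

    def gen_samples(self, z):
        """Map a batch of latent codes to images in [0, 1]."""
        # Real implementation: run the trained decoder network on z.
        # Placeholder: mid-gray RGB images of the right shape.
        n = z.shape[0]
        return np.full((n, self.npx, self.npx, 3), 0.5, dtype=np.float32)

model = VAETheanoStub()
imgs = model.gen_samples(np.zeros((4, model.z_dim), dtype=np.float32))
print(imgs.shape)  # (4, 64, 64, 3)
```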
## Cat Paper Collection

If you love cats, and love reading cool graphics, vision, and learning papers, please check out our Cat Paper Collection:
[Github] [Webpage]