luanfujun / deep-photo-styletransfer
- Monday, March 27, 2017, 03:12:02
Matlab
Code and data for paper "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511
This code is based on torch. It has been tested on Ubuntu 14.04 LTS.
Dependencies:
- Torch (with matio-ffi and loadcaffe)
- Matlab or Octave

CUDA backend:
- CUDA
- cudnn

Download VGG-19:

```sh
sh models/download_models.sh
```

Compile cuda_utils.cu (adjust PREFIX and NVCC_PREFIX in makefile for your machine):

```sh
make clean && make
```
To generate all results (in examples/) using the provided scripts, simply run

```matlab
run('gen_laplacian/gen_laplacian.m')
```

in Matlab and then

```sh
python gen_all.py
```

in Python. The final output will be in examples/final_results/.
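A driver script automates the per-example pipeline described in the steps below. As a hedged illustration (not the repo's actual gen_all.py), the two torch invocations for a given example id might be assembled like this; the folder names "examples/tmp_results" and "examples/final_results" are assumptions:

```python
# Sketch of how a batch driver could build the two torch commands from this
# README for example <id>. Output folder names are assumptions, not taken
# from the repo's gen_all.py.

def build_commands(i, tmp="examples/tmp_results", final="examples/final_results"):
    content = f"examples/input/in{i}.png"
    style = f"examples/style/tar{i}.png"
    cseg = f"examples/segmentation/in{i}.png"
    sseg = f"examples/segmentation/tar{i}.png"
    # Stage 1: segmented style transfer producing an intermediate image.
    stage1 = ["th", "neuralstyle_seg.lua",
              "-content_image", content, "-style_image", style,
              "-content_seg", cseg, "-style_seg", sseg,
              "-index", str(i), "-serial", tmp]
    # Stage 2: matting-based refinement initialized from stage 1's output.
    stage2 = ["th", "deepmatting_seg.lua",
              "-content_image", content, "-style_image", style,
              "-content_seg", cseg, "-style_seg", sseg,
              "-index", str(i), "-init_image", f"{tmp}/out{i}_t_1000.png",
              "-serial", final, "-f_radius", "15", "-f_edge", "0.01"]
    return stage1, stage2
```

Each stage would then be launched with, e.g., subprocess.run(stage1, check=True).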
Given input and style images with semantic segmentation masks, put them in examples/ respectively. They will have the following filename form: examples/input/in<id>.png, examples/style/tar<id>.png, examples/segmentation/in<id>.png and examples/segmentation/tar<id>.png.
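The naming convention above can be checked programmatically before running the pipeline. This small helper is hypothetical (not part of the repo), but encodes the four expected paths per example id:

```python
import os

def expected_files(i, root="examples"):
    """The four files the pipeline expects for example <id>."""
    return [os.path.join(root, "input", f"in{i}.png"),
            os.path.join(root, "style", f"tar{i}.png"),
            os.path.join(root, "segmentation", f"in{i}.png"),
            os.path.join(root, "segmentation", f"tar{i}.png")]

def missing_files(i, root="examples"):
    """Return the expected paths that do not exist on disk."""
    return [p for p in expected_files(i, root) if not os.path.exists(p)]
```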
Compute the matting Laplacian matrix using gen_laplacian/gen_laplacian.m in Matlab. The output matrix will have the following filename form: gen_laplacian/Input_Laplacian_3x3_1e-7_CSR<id>.mat.
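For readers without Matlab, the matrix in question is the closed-form matting Laplacian of Levin et al.; the "3x3" and "1e-7" in the filename correspond to the local window size and the regularization epsilon. The following is a minimal NumPy/SciPy sketch of that construction, not the repo's Matlab code:

```python
import numpy as np
from scipy import sparse

def matting_laplacian(img, eps=1e-7, r=1):
    """Closed-form matting Laplacian for an HxWx3 float image.

    Uses a (2r+1)x(2r+1) window; r=1 and eps=1e-7 match the 3x3 / 1e-7
    settings in the output filename. Illustrative reimplementation only.
    """
    h, w, c = img.shape
    win_size = (2 * r + 1) ** 2
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Pixels and their flat indices in this window.
            win = img[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, c)
            wi = idx[y - r:y + r + 1, x - r:x + r + 1].ravel()
            mu = win.mean(axis=0)
            cov = win.T @ win / win_size - np.outer(mu, mu)
            inv = np.linalg.inv(cov + (eps / win_size) * np.eye(c))
            d = win - mu
            # Per-window contribution: delta_ij - (1 + d_i' inv d_j)/|w|.
            block = np.eye(win_size) - (1.0 + d @ inv @ d.T) / win_size
            rows.append(np.repeat(wi, win_size))
            cols.append(np.tile(wi, win_size))
            vals.append(block.ravel())
    return sparse.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w)).tocsr()
```

The resulting matrix is symmetric positive semi-definite with zero row sums, which is what the photorealism regularization term relies on.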
Generate the intermediate segmented style-transfer result:

```sh
th neuralstyle_seg.lua -content_image <input> -style_image <style> -content_seg <inputMask> -style_seg <styleMask> -index <id> -serial <intermediate_folder>
```

Then generate the final result, initialized from the intermediate output:

```sh
th deepmatting_seg.lua -content_image <input> -style_image <style> -content_seg <inputMask> -style_seg <styleMask> -index <id> -init_image <intermediate_folder/out<id>_t_1000.png> -serial <final_folder> -f_radius 15 -f_edge 0.01
```
Note: In the main paper we generate all comparison results using an automatic scene segmentation algorithm modified from DilatedNet. Manual segmentation enables more diverse tasks, so we provide the masks in examples/segmentation/.
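A segmentation mask of this kind typically encodes one color per semantic class. As a hedged sketch (the exact mask format is an assumption, not specified in this README), splitting such a mask into per-class binary masks might look like:

```python
import numpy as np

def split_masks(seg):
    """Map each unique RGB color in an HxWx3 mask to a binary HxW mask.

    Assumes one flat color per semantic class; illustrative helper only.
    """
    colors = np.unique(seg.reshape(-1, 3), axis=0)
    return {tuple(int(v) for v in c): np.all(seg == c, axis=-1)
            for c in colors}
```

Each binary mask can then be passed to the per-class style-transfer losses for its region.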
Here are some results from our algorithm (from left to right: input, style, and our output):
Feel free to contact me if you have any questions (Fujun Luan, fl356@cornell.edu).