svip-lab / impersonator
PyTorch implementation of our ICCV 2019 paper: Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
Please clone the latest code.
Requirements: Python 3.6+, PyTorch 1.2, torchvision 0.4, CUDA 10.0, at least 8 GB of GPU memory, and the other packages listed in requirements.txt.
```bash
pip install -r requirements.txt
cd thirdparty/neural_renderer
python setup.py install
```

Download `pretrains.zip` from OneDrive or BaiduPan, move it to the `assets` directory, and unzip it there.
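After installation, you can sanity-check the environment with a quick snippet (a minimal sketch, assuming the steps above succeeded):

```bash
# print the installed versions and whether a CUDA device is visible
python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"
```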
Download `checkpoints.zip` from OneDrive or BaiduPan, unzip it, and move the extracted checkpoints to the `outputs` directory.
Download `samples.zip` from OneDrive or BaiduPan, unzip it, and move the extracted samples to the `assets` directory.
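Taken together, the three download steps might look like the following (a hedged sketch; it assumes the zip files were saved to the repository root, and the exact layout inside each archive may differ):

```bash
# pretrains.zip is moved into assets/ and unzipped there
mv pretrains.zip assets/ && unzip assets/pretrains.zip -d assets/
# checkpoints go under outputs/, samples under assets/
unzip checkpoints.zip -d outputs/
unzip samples.zip -d assets/
```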
If you want to reproduce the demo results shown on the webpage, run the following scripts. The results are saved in `./outputs/results/demos`.
Demo of Motion Imitation

```bash
python demo_imitator.py --gpu_ids 1
```

Demo of Appearance Transfer

```bash
python demo_swap.py --gpu_ids 1
```

Demo of Novel View Synthesis

```bash
python demo_view.py --gpu_ids 1
```
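As a convenience, the three demos can also be run back to back with a small shell loop (a sketch, assuming the same GPU id as the examples above):

```bash
# run all three demos sequentially on GPU 1
for demo in demo_imitator demo_swap demo_view; do
    python ${demo}.py --gpu_ids 1
done
```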
If you want to test other inputs (a source image and reference images), here are some examples. Please replace `--ip YOUR_IP` and `--port YOUR_PORT` with your own IP address and port for Visdom visualization.
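For reference, a Visdom server can be started locally like this (a sketch; it assumes the `visdom` package is installed, and the port is just an example):

```bash
# start a Visdom server; pass the same host/port via --ip/--port to the run scripts
python -m visdom.server -port 31102
```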
Motion Imitation
```bash
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg \
    --tgt_path ./assets/samples/refs/iPER/024_8_2 \
    --bg_ks 13 --ft_ks 3 \
    --has_detector --post_tune \
    --save_res --ip YOUR_IP --port YOUR_PORT
```

```bash
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/fashion_woman/Sweaters-id_0000088807_4_full.jpg \
    --tgt_path ./assets/samples/refs/iPER/024_8_2 \
    --bg_ks 25 --ft_ks 3 \
    --has_detector --post_tune \
    --save_res --ip YOUR_IP --port YOUR_PORT
```

```bash
python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/internet/men1_256.jpg \
    --tgt_path ./assets/samples/refs/iPER/024_8_2 \
    --bg_ks 7 --ft_ks 3 \
    --has_detector --post_tune --front_warp \
    --save_res --ip YOUR_IP --port YOUR_PORT
```
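To imitate the same reference motion from several source images, the commands above can be wrapped in a loop (a sketch that reuses only the flags shown above; note that `--bg_ks` is fixed to one value here, whereas the examples tune it per image):

```bash
# hypothetical convenience loop over two of the example source images
for src in ./assets/src_imgs/imper_A_Pose/009_5_1_000.jpg \
           ./assets/src_imgs/internet/men1_256.jpg; do
    python run_imitator.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
        --src_path "${src}" \
        --tgt_path ./assets/samples/refs/iPER/024_8_2 \
        --bg_ks 13 --ft_ks 3 \
        --has_detector --post_tune \
        --save_res --ip YOUR_IP --port YOUR_PORT
done
```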
Appearance Transfer

An example where the source image comes from iPER and the reference image comes from the DeepFashion dataset.
```bash
python run_swap.py --gpu_ids 0 --model imitator --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/imper_A_Pose/024_8_2_0000.jpg \
    --tgt_path ./assets/src_imgs/fashion_man/Sweatshirts_Hoodies-id_0000680701_4_full.jpg \
    --bg_ks 13 --ft_ks 3 \
    --has_detector --post_tune --front_warp --swap_part body \
    --save_res --ip http://10.10.10.100 --port 31102
```

Novel View Synthesis
```bash
python run_view.py --gpu_ids 0 --model viewer --output_dir ./outputs/results/ \
    --src_path ./assets/src_imgs/internet/men1_256.jpg \
    --bg_ks 13 --ft_ks 3 \
    --has_detector --post_tune --front_warp --bg_replace \
    --save_res --ip http://10.10.10.100 --port 31102
```

The details of each running script are shown in runDetails.md.
Training details are shown in train.md [TODO].
Citation

```
@InProceedings{lwb2019,
    title={Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis},
    author={Wen Liu and Zhixin Piao and Min Jie and Wenhan Luo and Lin Ma and Shenghua Gao},
    booktitle={The IEEE International Conference on Computer Vision (ICCV)},
    year={2019}
}
```