NVIDIA / vid2vid
Pytorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation. It can be used for turning semantic label maps into photo-realistic videos, synthesizing people talking from edge maps, or generating human bodies from poses.
Video-to-Video Synthesis
Ting-Chun Wang¹, Ming-Yu Liu¹, Jun-Yan Zhu², Guilin Liu¹, Andrew Tao¹, Jan Kautz¹, Bryan Catanzaro¹
¹NVIDIA Corporation, ²MIT CSAIL
In arXiv, 2018.
Install the python libraries dominate and requests:
pip install dominate requests
Clone this repo:
git clone https://github.com/NVIDIA/vid2vid
cd vid2vid
We include an example Cityscapes video in the datasets folder.
First, download and compile a snapshot of the FlowNet2 repo from https://github.com/NVIDIA/flownet2-pytorch by running python scripts/download_flownet2.py.
Please download the pre-trained Cityscapes model by:
python scripts/download_models.py
To test the model (bash ./scripts/test_2048.sh):
#!./scripts/test_2048.sh
python test.py --name label2city_2048 --loadSize 2048 --n_scales_spatial 3 --use_instance --fg --use_single_G
The test results will be saved to an HTML file here: ./results/label2city_2048/test_latest/index.html.
We also provide a smaller model trained with 1 GPU, which produces slightly worse performance at 1024 x 512 resolution.
Please download the model by:
python scripts/download_models_g1.py
To test the model (bash ./scripts/test_1024_g1.sh):
#!./scripts/test_1024_g1.sh
python test.py --name label2city_1024_g1 --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G
You can find more example scripts in the scripts directory.
Add your training images to the datasets folder in the same way the example images are provided.
To train a model, first download the FlowNet2 checkpoint file by running python scripts/download_models_flownet2.py.
Training with 8 GPUs:
To train a model at 512 x 256 resolution (bash ./scripts/train_512.sh):
#!./scripts/train_512.sh
python train.py --name label2city_512 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 6 --n_frames_total 6 --use_instance --fg
To train a model at 1024 x 512 resolution (bash ./scripts/train_1024.sh); this loads the 512 x 256 checkpoint via --load_pretrain:
#!./scripts/train_1024.sh
python train.py --name label2city_1024 --loadSize 1024 --n_scales_spatial 2 --num_D 3 --gpu_ids 0,1,2,3,4,5,6,7 --n_gpus_gen 4 --use_instance --fg --niter_step 2 --niter_fix_global 10 --load_pretrain checkpoints/label2city_512
To view training results, please check out intermediate results in ./checkpoints/label2city_1024/web/index.html.
If you have TensorFlow installed, you can see TensorBoard logs in ./checkpoints/label2city_1024/logs by adding --tf_log to the training scripts.
Training with a single GPU:
To train a model at 256 x 128 resolution (bash ./scripts/train_256_g1.sh):
#!./scripts/train_256_g1.sh
python train.py --name label2city_256_g1 --loadSize 256 --use_instance --fg --n_downsample_G 2 --num_D 1 --max_frames_per_gpu 6 --n_frames_total 6
Training at full (2k x 1k) resolution:
To train at full 2048 x 1024 resolution, run bash ./scripts/train_2048.sh.
If only GPUs with 12G/16G memory are available, please use the script ./scripts/train_2048_crop.sh, which will crop the images during training. Performance is not guaranteed with this script.
Training with your own dataset:
If your input is a label map, please use --label_nc N during both training and testing, where N is the number of labels; the label maps are converted into one-hot vectors internally.
If your input is not a label map, please specify --label_nc 0 and --input_nc N, where N is the number of input channels (the default is 3 for RGB images).
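For the label-map case, here is a minimal sketch of the one-hot conversion that --label_nc N implies (illustrative only; the shapes and the value N = 35 are assumptions, not the repository's code):

import torch

# Hypothetical 1 x 1 x H x W label map with integer class IDs in [0, N-1].
N = 35                                    # assumed number of label classes
label_map = torch.randint(0, N, (1, 1, 4, 6))

# Scatter ones along the channel dimension to get an N-channel one-hot tensor.
one_hot = torch.zeros(1, N, 4, 6)
one_hot.scatter_(1, label_map, 1.0)
print(one_hot.shape)                      # torch.Size([1, 35, 4, 6])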
The default preprocessing setting is scaleWidth, which will scale the width of all training images to opt.loadSize (1024) while keeping the aspect ratio. If you want a different setting, please change it using the --resize_or_crop option. For example, scaleWidth_and_crop first resizes the image to width opt.loadSize and then does a random crop of size (opt.fineSize, opt.fineSize). crop skips the resizing step and only performs random cropping. scaledCrop crops the image while retaining the original aspect ratio. If you don't want any preprocessing, please specify none, which does nothing other than making sure the image size is divisible by 32.
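As a rough illustration of these --resize_or_crop modes (a sketch under assumed sizes and helper names, not the code used in this repository; scaledCrop is omitted):

import random
from PIL import Image

def preprocess(img, mode, load_size=1024, fine_size=512):
    # Illustrative versions of the preprocessing modes described above.
    w, h = img.size
    if mode == 'scaleWidth':                    # scale width to load_size, keep aspect ratio
        img = img.resize((load_size, load_size * h // w), Image.BICUBIC)
    elif mode == 'scaleWidth_and_crop':         # scale width, then take a random square crop
        img = img.resize((load_size, load_size * h // w), Image.BICUBIC)
        x = random.randint(0, img.size[0] - fine_size)
        y = random.randint(0, img.size[1] - fine_size)
        img = img.crop((x, y, x + fine_size, y + fine_size))
    elif mode == 'crop':                        # skip resizing, only random cropping
        x = random.randint(0, w - fine_size)
        y = random.randint(0, h - fine_size)
        img = img.crop((x, y, x + fine_size, y + fine_size))
    elif mode == 'none':                        # only make both sides divisible by 32
        img = img.resize((w // 32 * 32, h // 32 * 32), Image.BICUBIC)
    return img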
Some important flags:
n_gpus_gen: the number of GPUs to use for the generators (the remaining GPUs are used for the discriminators). We separate generators and discriminators onto different GPUs because, at high resolutions, even a single frame cannot fit on one GPU. If this number is set to -1, there is no separation and all GPUs are used for both generators and discriminators (this only works for low-resolution images).
n_frames_G: the number of input frames fed into the generator network; i.e., n_frames_G - 1 is the number of past frames we look at. The default is 3 (conditioned on the previous two frames).
n_frames_D: the number of frames fed into the temporal discriminator. The default is 3.
n_scales_spatial: the number of scales in the spatial domain. We train from the coarsest scale all the way to the finest scale. The default is 3.
n_scales_temporal: the number of scales for the temporal discriminator. The finest scale takes the sequence at the original frame rate. Coarser scales subsample the frames by a factor of n_frames_D before feeding them into the discriminator. For example, if n_frames_D = 3 and n_scales_temporal = 3, the discriminator effectively sees 27 frames (see the sketch after this list). The default is 3.
max_frames_per_gpu: the number of frames held on one GPU during training. If your GPU memory can fit more frames, try to make this number bigger. The default is 1.
max_frames_backpropagate: the number of frames the loss backpropagates to previous frames. For example, if this number is 4, the loss on frame n will backpropagate to frame n-3. Increasing this number slightly improves performance, but also makes training less stable. The default is 1.
n_frames_total: the total number of frames in a sequence we want to train with. We gradually increase this number during training.
niter_step: how many epochs we wait before doubling n_frames_total. The default is 5.
niter_fix_global: if this number is not 0, only train the finest spatial scale for this number of epochs before starting to finetune all scales.
batchSize: the number of sequences to train at a time. We normally set batchSize to 1 since one sequence is often enough to occupy all GPUs. If you want batchSize > 1, currently only batchSize == n_gpus_gen is supported.
no_first_img: if not specified, the model assumes the first frame is given and synthesizes the successive frames. If specified, the model will also try to synthesize the first frame.
fg: if specified, use the foreground-background separation model.
Please see options/train_options.py and options/base_options.py for all the training flags; see options/test_options.py and options/base_options.py for all the test flags.
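To make the interplay of these flags concrete, here is a small illustrative calculation (assumed values, not code from this repository): the coarsest temporal discriminator scale spans n_frames_D ** n_scales_temporal original frames, and n_frames_total doubles every niter_step epochs.

# Effective temporal span of the multi-scale discriminator.
n_frames_D = 3                 # frames per discriminator pass
n_scales_temporal = 3          # number of temporal scales
print(n_frames_D ** n_scales_temporal)          # 27 original frames at the coarsest scale

# Assumed doubling schedule for n_frames_total (starting value taken from the example scripts above).
n_frames_total, niter_step = 6, 5
for epoch in range(0, 30, niter_step):
    print(epoch, n_frames_total)                # 0:6, 5:12, 10:24, 15:48, 20:96, 25:192
    n_frames_total *= 2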
If you find this useful for your research, please cite the following:
@article{wang2018vid2vid,
title={Video-to-Video Synthesis},
author={Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Guilin Liu and Andrew Tao and Jan Kautz and Bryan Catanzaro},
journal={arXiv},
year={2018}
}
This code borrows heavily from pytorch-CycleGAN-and-pix2pix and pix2pixHD.