# DeOldify
A Deep Learning based project for colorizing and restoring old images
Simply put, the mission of this project is to colorize and restore old images. I'll get into the details in a bit, but first let's get to the pictures! BTW – most of these source images originally came from the r/TheWayWeWere subreddit, so credit to them for finding such great photos.
- Maria Anderson as the Fairy Fleur de farine and Lyubov Rabtsova as her page in the ballet “Sleeping Beauty” at the Imperial Theater, St. Petersburg, Russia, 1890
- Woman relaxing in her living room (1920, Sweden)
- Medical students pose with a cadaver, around 1890
- Interior of Miller and Shoemaker Soda Fountain, 1899
- Edinburgh from the sky in the 1920s
- People watching a television set for the first time at Waterloo station, London, 1936
- Portsmouth Square in San Francisco, 1851
This is a deep learning based model. More specifically, what I've done is combined the following approaches:
- Self-Attention Generative Adversarial Network (https://arxiv.org/abs/1805.08318), except the generator is a pretrained Unet, modified to have spectral normalization and self-attention.
- Training structure inspired by (but not the same as) Progressive Growing of GANs (https://arxiv.org/abs/1710.10196): the number of layers stays constant, and the input size is increased progressively instead.
- Two Time-Scale Update Rule (https://arxiv.org/abs/1706.08500): one-to-one generator/critic iterations with a higher critic learning rate.
- A generator loss in two parts: a perceptual (feature) loss based on VGG16, which biases the generator toward replicating the input image, plus the loss score from the critic (sketched below).
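To make the loss part concrete, here's an illustrative sketch of a two-part generator loss. The feature extractor, loss forms, and weighting are my assumptions for illustration, not the repo's exact implementation:

```python
import torch.nn.functional as F

# Two-part generator loss: perceptual (VGG feature) loss plus critic score.
# `vgg_features`, `critic`, and `adv_weight` are illustrative stand-ins.
def generator_loss(fake, target, critic, vgg_features, adv_weight=1e-3):
    perceptual = F.l1_loss(vgg_features(fake), vgg_features(target))  # match features of output and target
    adversarial = -critic(fake).mean()  # reward fooling the critic
    return perceptual + adv_weight * adversarial
```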
The beauty of this model is that it should be generally useful for all sorts of image modification, and it should do it quite well. What you're seeing above are the results of the colorization model, but that's just one component in a pipeline that I'm looking to develop here with the exact same model.
What I develop next with this model will be based on trying to solve the problem of making these old images look great, so the next item on the agenda for me is the "defade" model. I've committed initial efforts on that, and it's in the early stages of training as I write this. Basically it's just training the same model to reconstruct images that have been augmented with ridiculous contrast/brightness adjustments, as a simulation of fading photos and photos taken with old/bad equipment. I've already seen some promising results on that as well.
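To make that idea concrete, here's a minimal sketch of that kind of "fading" augmentation. It's my own illustration; the enhancement ranges are guesses, not the project's actual transform:

```python
import random
from PIL import Image, ImageEnhance

# Simulate a faded photo: crush the contrast, then wash out the brightness.
# The ranges below are assumptions for illustration only.
def simulate_fade(img: Image.Image) -> Image.Image:
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.2, 0.6))
    img = ImageEnhance.Brightness(img).enhance(random.uniform(1.1, 1.8))
    return img
```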
So that's the gist of this project – I'm looking to make old photos look reeeeaaally good with GANs, and more importantly, make the project useful. And yes, I'm definitely interested in doing video, but first I need to sort out how to get this model under control with memory (it's a beast). It'd be nice if the models didn't take two to three days to train on a 1080 Ti as well (typical of GANs, unfortunately). In the meantime, though, this is going to be my baby, and I'll be actively updating and improving the code for the foreseeable future. I'll try to make this as user-friendly as possible, but I'm sure there are going to be hiccups along the way.
Oh and I swear I'll document the code properly...eventually. Admittedly I'm one of those people who believes in "self documenting code" (LOL).
This project is built around the wonderful Fast.AI library. Unfortunately, it's the *old* version, and I have yet to upgrade it to the new version. (That's definitely on the agenda.) So prereqs, in summary:
- The old Fast.AI library, along with whatever dependencies it has
- PyTorch 0.4.1 (spectral_norm support is needed)
- Jupyter Lab
- Tensorboard (i.e. install TensorFlow) and TensorboardX
- ImageNet, as training data
- A beefy graphics card – something with at least 11 GB of memory, like a GeForce 1080 Ti, is strongly preferred
For those wanting to start transforming their own images right away, without training the model yourself (understandable): you'll need me to upload pretrained weights first. I'm working on that now. Once those are available, you'll be able to refer to them in the visualization notebooks. I'd use ColorizationVisualization.ipynb. Basically you'd replace
```python
colorizer_path = IMAGENET.parent/('bwc_rc_gen_192.h5')
```
with the weight file I upload for the generator (colorizer).
Then you'd just drop whatever images you want to run this against into the /test_images/ folder, and you can visualize the results inside the notebook with lines like this:
```python
vis.plot_transformed_image("test_images/derp.jpg", netG, md.val_ds, tfms=x_tfms, sz=500)
```
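If you want to process everything in the folder in one go, a simple loop over it should work. This is a minimal sketch of my own, reusing the `vis`, `netG`, `md`, and `x_tfms` objects the notebook already sets up (the extension filter is an assumption):

```python
from pathlib import Path

# Colorize every image in test_images/ at 500px
for img_path in sorted(Path('test_images').iterdir()):
    if img_path.suffix.lower() in ('.jpg', '.jpeg', '.png'):
        vis.plot_transformed_image(str(img_path), netG, md.val_ds, tfms=x_tfms, sz=500)
```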
I'd keep the size around 500px, give or take, assuming you're running this on a GPU with plenty of memory (the 11 GB of a GeForce 1080 Ti, for example). If you have less than that, you'll have to go smaller or try running it on CPU. I actually tried the latter, but for some reason it was *really* absurdly slow, and I didn't take the time to investigate why, other than to find out that the PyTorch people were recommending building from source to get a big performance boost. Yeah...I didn't want to bother at that point.
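If you'd rather pick the size automatically, a rough heuristic like this would do it (the memory thresholds here are my guesses, not tested cutoffs):

```python
import torch

# Scale the render size to available GPU memory (thresholds are assumptions)
if torch.cuda.is_available():
    mem_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    sz = 500 if mem_gb >= 11 else 380 if mem_gb >= 8 else 256
else:
    sz = 256  # CPU fallback: keep it small; it will be slow regardless
```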
Visualizations of generated images as training progresses *can* be done in Jupyter as well – it's just a simple boolean flag here when you instantiate this visualization hook: `GANVisualizationHook(TENSORBOARD_PATH, trainer, 'trainer', jupyter=True, visual_iters=100)`
I prefer keeping this False and just using Tensorboard, though. Trust me – you'll want it. Plus, if you leave it running too long, Jupyter will eat up a lot of memory with said images.
Model weight saves are also done automatically during the training runs by the GANTrainer, defaulting to saving every 1000 iterations (it's an expensive operation). They're stored in the root training folder you provide, and the filename is based on the save_base_name you provide to the training schedule. Weights are saved separately for each training size.
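Judging by the example weight file above (bwc_rc_gen_192.h5), the generator weights appear to follow a `{save_base_name}_gen_{size}.h5` pattern. Treat this little helper as an inference from that one filename, not documented behavior:

```python
from pathlib import Path

# Inferred naming pattern (assumption based on 'bwc_rc_gen_192.h5'):
# {save_base_name}_gen_{size}.h5 in the training root folder
def gen_weights_path(training_root: Path, save_base_name: str, size: int) -> Path:
    return training_root / f"{save_base_name}_gen_{size}.h5"
```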
I'd recommend navigating the code top-down – the Jupyter notebooks are the place to start. I treat them just as a convenient interface for prototyping and visualization – everything else goes into .py files (and therefore a proper IDE) as soon as I can find a place for it. I already have visualization examples conveniently included – just open the xVisualization notebooks to run them. They point to test images already included in the project (in test_images), so you can start right away.
The "GAN Schedules" you'll see in the notebooks are probably the ugliest looking thing I've put in the code, but they're just my version of implementing progressive GAN training, suited to a Unet generator. That's all that's going on there really.
As far as pretrained weights go: I'll get them up in the next few days – I'm working on a new set now that's looking better than ever.
Generally with training, you'll start seeing good results about midway through the 192px size (assuming you're following the progressive training examples I laid out in the notebooks), and it just gets better from there.
I'm sure I screwed up something putting this up, so please let me know if that's the case.
I'll be posting more results here on Twitter: https://twitter.com/citnaj