PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models
Code accompanying the CVPR'20 paper of the same title. Paper link: https://arxiv.org/abs/2003.03808
The main file of interest for applying PULSE is run.py. A full list of arguments with descriptions can be found in that file; here we describe those relevant to getting started.
You will need to install cmake first (required for dlib, which is used for face alignment). Currently the code only works with CUDA installed (and therefore requires an appropriate GPU) and has been tested on Linux. For the full set of required Python packages, create a Conda environment from the provided YAML, e.g.
conda env create -f pulse.yml
Finally, you will need an internet connection the first time you run the code as it will automatically download the relevant pretrained model from Google Drive (if it has already been downloaded, it will use the local copy).
By default, input data for run.py should be placed in ./input/ (though this can be modified). However, this assumes the faces have already been aligned and downscaled. If your data is not already in this form, place it in realpics and run align_face.py, which will do both steps for you. (Again, all directories can be changed via command-line arguments if more convenient.) You will pick a downscaling factor at this stage.
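The alignment step might look like the following sketch. The directory names come from this README, but the argument names are assumptions; run `python align_face.py -h` (or read the top of the script) for the actual options. The `if` guard simply skips the call when the script is not present.

```shell
# Hypothetical alignment workflow; flag names are assumptions -- check
# align_face.py for the real argument list.
mkdir -p realpics            # put your raw, unaligned photos here
if [ -f align_face.py ]; then
    # align the faces and downscale them to the chosen low resolution
    python align_face.py -input_dir realpics -output_dir input -output_size 32
fi
```

The downscaled size (here 32) is where you choose your downscaling factor.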
Note that if your data is already at a low resolution, downscaling it further will retain very little information. In this case, you may wish to bicubically upsample it first (usually to 1024x1024) and then let align_face.py downscale it for you.
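One way to do the bicubic pre-upsampling outside of Python is with ImageMagick, sketched below. The filename lowres.png is a placeholder, and this assumes `convert` is installed; Catrom is a bicubic-family resampling filter, and the `!` suffix forces the exact 1024x1024 size.

```shell
# Hypothetical pre-upsampling step with ImageMagick; skipped when the
# tool or the placeholder input file is absent.
if command -v convert >/dev/null 2>&1 && [ -f lowres.png ]; then
    convert lowres.png -filter Catrom -resize '1024x1024!' upsampled.png
fi
```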
The dataset we evaluated on was CelebA-HQ, but in our experience PULSE works with any picture of a realistic face.
Once your data is appropriately formatted, all you need to do is run run.py.
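A minimal invocation might look like the sketch below. The flag names are assumptions; consult the argument list at the top of run.py for the real ones. The `if` guard skips the call when the script is not present.

```shell
# Hypothetical invocation; flag names are assumptions (see run.py).
mkdir -p input runs   # aligned inputs in input/, results written to runs/
if [ -f run.py ]; then
    python run.py -input_dir input -output_dir runs
fi
```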