Interactive Image Translation with pix2pix-tensorflow
The pix2pix model works by training on pairs of images, such as building facade labels paired with photos of those facades, and then attempts to generate the corresponding output image from any input image you give it. The idea is straight from the pix2pix paper, which is a good read.
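Conceptually, the generator is trained with a GAN loss plus an L1 term that keeps its output close to the paired target image (the paper weights the L1 term at 100). Here is a minimal sketch of that objective, not the actual pix2pix-tensorflow code:

```python
import tensorflow as tf

# Minimal sketch of the pix2pix generator objective: a GAN term that
# rewards fooling the discriminator, plus an L1 term that keeps the
# generated image close to the paired ground-truth image.
def generator_loss(disc_fake_logits, gen_output, target, l1_weight=100.0):
    # The generator wants the discriminator to label its output as real.
    gan_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            labels=tf.ones_like(disc_fake_logits),
            logits=disc_fake_logits))
    # Pixel-wise closeness to the target image from the training pair.
    l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
    return gan_loss + l1_weight * l1_loss
```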
Trained on a database of labeled building facades paired with photos of those facades. It doesn't seem sure what to do with a large empty area, but if you put enough windows there it often produces reasonable results. Draw rectangles in the "wall" color to erase things.
I didn't have the names of the different parts of building facades, so I just guessed what they were called.
Trained on about 2k stock cat photos and edges automatically generated from those photos. Generates cat-colored objects, some with nightmare faces. The best one I've seen yet was a cat-beholder.
Some of the pictures look especially creepy, I think because it's easier to notice when an animal looks wrong, especially around the eyes. The auto-detected edges are not very good and in many cases didn't detect the cat's eyes, making it a bit worse for training the image translation model.
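The edge datasets for the pix2pix paper were produced with HED, a learned edge detector, plus some post-processing. As an illustration of what building such training pairs involves, here's a hypothetical sketch using plain Canny edges instead (which would miss even more detail around the eyes); as far as I know, pix2pix-tensorflow reads each training pair as the input and target images concatenated side by side:

```python
import cv2
import numpy as np

# Hypothetical sketch (not the original preprocessing): build an
# edges-to-photo training pair in the side-by-side layout that
# pix2pix-tensorflow reads.
def make_training_pair(photo_path):
    photo = cv2.imread(photo_path)                          # HxWx3 BGR
    gray = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # white edges on black
    edges = 255 - cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)   # black edges on white
    # input (edges) and target (photo) concatenated horizontally
    return np.concatenate([edges, photo], axis=1)
```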
Trained on a database of ~50k shoe pictures collected from Zappos, along with edges generated automatically from those pictures. If you're really good at drawing the edges of shoes, you can try to produce some new designs. Keep in mind it's trained on real objects, so drawings that look more 3D seem to work better.
Similar to the previous one, trained on a database of ~137k handbag pictures collected from Amazon, along with edges automatically generated from those pictures. If you draw a shoe here instead of a handbag, you get a very oddly textured shoe.
The models were trained and exported with the pix2pix.py script from pix2pix-tensorflow, an open-source Tensorflow port of the pix2pix model.
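Training looks roughly like this (flag names as I recall them from the pix2pix-tensorflow README, and the facades paths are placeholders, so check the README for the current options):

```
python pix2pix.py \
  --mode train \
  --input_dir facades/train \
  --output_dir facades_train \
  --which_direction BtoA \
  --max_epochs 200
```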
The pre-trained models are available in the Datasets section on GitHub. All the ones released alongside the original pix2pix implementation should be available. The models can be exported from the pre-trained ones using the pix2pix.py script, and the exported models are linked from the server README.
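Exporting a trained model is a similar invocation, pointing --checkpoint at the training run's output directory (again a sketch from memory; the README is authoritative):

```
python pix2pix.py \
  --mode export \
  --output_dir facades_export \
  --checkpoint facades_train
```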