GANs, image processing, generative art, automating content creation

In 2014, Ian Goodfellow introduced a new paradigm in unsupervised learning with Generative Adversarial Networks (GANs).

This clever technique pits two neural networks against each other: a generator and a discriminator. The generator learns a mapping from a latent space to your dataset, while the discriminator learns to differentiate real images from those produced by the generator. As training progresses, the generator gets better and better at fooling the discriminator, in the hope that humans will also find it difficult to tell whether the data was generated by a machine.
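In symbols, this is the minimax game from the original paper, where G maps latent codes z to images and D outputs the probability that its input is real:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```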

In this way, the adversarial framework recasts an unsupervised modeling task as a supervised learning problem (real vs. fake) that drives improvement in a generative model. The results of this technique can be quite impressive. Despite this, GANs have not found as broad an application as other deep learning techniques.

Perhaps the most natural application is using the generative model to create new multimedia for artistic appreciation. Indeed, browsing #cycleGAN on social media, you will find examples of spectacular transfigurations.

Here, I try my hand at generating a few images for social media marketing of a home automation device we've been working on since my last post.

This implementation of GANs performs image-to-image translation, using convolutional layers to learn a mapping from image distribution A to image distribution B. CycleGANs differ from vanilla GANs in that they also learn the inverse mapping, adding a 'cycle consistency loss' that stabilizes training of an otherwise under-constrained problem.
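Concretely, with generators G: A→B and F: B→A and discriminators for each domain, the cycle consistency term penalizes an image for not surviving a round trip, and it is added to the two adversarial losses:

```latex
\mathcal{L}_{\text{cyc}}(G, F) =
\mathbb{E}_{a \sim p_A}\big[\lVert F(G(a)) - a \rVert_1\big]
+ \mathbb{E}_{b \sim p_B}\big[\lVert G(F(b)) - b \rVert_1\big],
\qquad
\mathcal{L} = \mathcal{L}_{\text{GAN}}(G, D_B)
+ \mathcal{L}_{\text{GAN}}(F, D_A)
+ \lambda\, \mathcal{L}_{\text{cyc}}(G, F)
```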

With this code, we simply set up the test/train/A/B directory structure for the two data distributions of interest. Reviewing the repo's README, you'll get a sense for the kinds of mappings these algorithms can learn.
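The layout itself is simple; sketching it as shell commands (the directory names follow the repo's README, while "latte2leaf" is just my name for one dataset):

```shell
# Dataset layout the pytorch CycleGAN repo expects, per its README.
# A = source distribution (e.g. latte art), B = target (e.g. cannabis flowers).
mkdir -p datasets/latte2leaf/trainA datasets/latte2leaf/trainB \
         datasets/latte2leaf/testA datasets/latte2leaf/testB
```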

Because Kindbot is specialized to cannabis, I used cannabis-related art to brand the product. In particular, I trained a number of models to transfigure images of the plant into other thematic images like faces of industry leaders, natural scenes, and popular images like latte art.

I start by gathering data with Google Images, which can be simplified by using the Firefox browser plugin Google Image Downloader. This will usually produce a few hundred images to train on, but feel free to experiment with search terms and other browsers to enlarge the image corpus.

Search terms like latte art, crags, home gardens, icicles, Snoop Dogg, or Steve DeAngelo were collected, along with cannabis flower/leaf images.
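Scraped images arrive in assorted sizes and formats, while the repo trains on square images (256×256 by default). A small Pillow sketch for normalizing a downloaded image before it goes into a trainA/trainB folder (the function name is my own):

```python
from PIL import Image

def prepare(src_path, dst_path, size=256):
    """Normalize a scraped image: force RGB, scale the shorter
    side to `size`, then center-crop to a size x size square."""
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    img.crop((left, top, left + size, top + size)).save(dst_path, "JPEG")
```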

After creating the data directories, I simply run the train script for something like 400 epochs. Using this PyTorch repo, you can also visualize training and inspect examples by running a local visdom server.
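The invocation looks roughly like this (flag names follow the repo's README at the time of writing; the dataset name is my own):

```shell
# Train an A<->B translation model on the dataset prepared above
python train.py --dataroot ./datasets/latte2leaf --name latte2leaf --model cycle_gan

# In another terminal, serve the training visualizations locally
python -m visdom.server
```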

While the model trains, you typically find that training samples quickly pick up global patterns, such as the color palette of your training distributions. As training progresses, the realism of the transfiguration can be impressive, even for small corpora. However, there are limitations, and choosing suitable corpus pairs is where the art in this technique lies.

When you have achieved the desired effect in training, you can run the repo's test script to generate transfigurations for each image in the test folders. The script comes with some flags to help you choose A2B or B2A for the images generated.
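A sketch of the test invocation, again assuming the repo's README flags (with --model cycle_gan, my recollection is that both directions are generated, and the README documents direction flags for one-way runs):

```shell
# Generate transfigurations for every image in testA/testB;
# outputs land under ./results/<name>/
python test.py --dataroot ./datasets/latte2leaf --name latte2leaf --model cycle_gan
```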

Here, I have taken the real A and the fake B the model produced and arranged them into a GIF with ImageMagick:

convert -loop 0 -delay 10 -morph 20 imageA.png imageB.png A2B.gif

Coffee and cannabis are a popular combination, and 'latte2leaf' takes advantage of that to generate a buzz for Kindbot.

That was part of our 'ocean grown' campaign. Below we have the 'field of dreams' campaign, which transfigures backyard gardens into cannabis fields.
Next, we featured a couple of transfigurations for our 'cannabis legends' campaign, with examples like snoop2kush.
Excited by the quality of the effect, I thought about how I might be able to apply similar techniques to improve Kindbot's features.

Growers must consider lighting options based on their preference for output, color temperature, and core technology. Often, efficient lighting for plants means the plants might not appear as they would under natural sunlight.

While Kindbot has learned to analyze plants under a broad spectrum of growing conditions, humans tend to prefer natural lighting. I wanted to explore applying GANs to produce the 'full spectrum version' of an image taken under LEDs and HPS lights, which tend to distort your grow room images with the color temperature they cast.

While building Kindbot, we also explored using 20X USB microscopes to replace the jeweler's loupe for inspecting the plants. I was introduced to the technique of focus stacking and wanted to achieve a similar effect algorithmically to bring greater resolution to the images I could take with the microscope.
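Focus stacking combines several shots taken at different focal depths by keeping, per pixel, the frame that is locally sharpest. A minimal numpy sketch of the idea (toy helpers of my own, not what any particular tool does):

```python
import numpy as np

def box_blur(a, k=5):
    # Crude box filter via shifted sums (wraps at edges; fine for a sketch)
    out = np.zeros_like(a, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (k * k)

def focus_stack(images):
    """Pick, per pixel, the frame with the highest local sharpness."""
    stack = np.stack([im.astype(float) for im in images])  # (N, H, W)
    # Laplacian magnitude as a simple sharpness measure
    neighbors = sum(np.roll(stack, s, axis=ax)
                    for ax, s in [(1, 1), (1, -1), (2, 1), (2, -1)])
    lap = np.abs(neighbors - 4 * stack)
    # Smooth the measure so the per-pixel choice is locally coherent
    sharp = np.stack([box_blur(l) for l in lap])
    idx = sharp.argmax(axis=0)
    rows, cols = np.indices(idx.shape)
    return stack[idx, rows, cols]
```

For each output pixel this selects the source frame whose neighborhood has the strongest high-frequency content, which is the in-focus one.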

Here, the repo illustrates a similar effect with the iphone2dslr model from its README. Since that data distribution was mostly vegetative, I trained it up and tested the application of CycleGANs for super-resolution.