For the past few days, we've been trying to create pixel art of our own, and we thought of using Generative Adversarial Networks (GANs) to help with the process. Since the last time we worked on a project involving GANs was three years ago, we did a quick Google search on the topic to refresh our memory.
We were in complete awe of all the new applications that have stemmed from the original GANs (first introduced by the "GANfather," Ian Goodfellow). So we decided to compile the new applications of GANs so that both you and we can leverage them in future use cases.

Here's a list of 14 different applications of Generative Adversarial Networks (GANs):
According to the GitHub repo, Neural Photo Editor is a simple interface for editing natural photos with generative neural networks.

Some commands of the tool are as follows:
According to the GitHub repo, Illustration GAN is a simple, clean TensorFlow implementation of Generative Adversarial Networks with a focus on modeling illustrations.

The model is based on DCGANs, but with a few important differences:
Stacked Generative Adversarial Networks (StackGAN) is a variant of GANs that can generate 256x256 photo-realistic images conditioned on text descriptions.

StackGAN uses a two-stage approach where the Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs and generates high-resolution images with photo-realistic details.
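The two-stage pipeline can be sketched with toy stand-ins for the real networks. Only the shapes (64x64 for Stage-I, 256x256 for Stage-II) follow the paper; the function bodies below are purely illustrative, not StackGAN's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def stage1_generator(text_embedding, noise):
    """Hypothetical Stage-I stand-in: produce a coarse 64x64 RGB image
    from a text embedding plus a noise vector (the real Stage-I is a
    deep generator with conditioning augmentation, not this projection)."""
    seed = np.concatenate([text_embedding, noise])
    proj = rng.standard_normal((64 * 64 * 3, seed.size)) @ seed
    return np.tanh(proj).reshape(64, 64, 3)

def stage2_generator(low_res_image, text_embedding):
    """Hypothetical Stage-II stand-in: refine the Stage-I result to
    256x256 (here, plain nearest-neighbour 4x upsampling)."""
    up = low_res_image.repeat(4, axis=0).repeat(4, axis=1)
    return np.clip(up, -1.0, 1.0)

text = rng.standard_normal(128)   # pretend text embedding
z = rng.standard_normal(100)      # latent noise

coarse = stage1_generator(text, z)
fine = stage2_generator(coarse, text)
print(coarse.shape, fine.shape)   # (64, 64, 3) (256, 256, 3)
```

The point of the split is that Stage-II never has to invent a composition from scratch; it only refines an already-plausible low-resolution layout.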
3D Generative Adversarial Network (3D-GAN) is a type of GAN that generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets.
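The output of 3D-GAN is a voxel occupancy grid rather than a flat image. The sketch below is a toy stand-in: the 200-dimensional latent and the 64x64x64 grid match the paper, but the generator body here is illustrative (the real model uses volumetric transposed convolutions):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_3d_generator(z):
    """Hypothetical stand-in for 3D-GAN's generator: map a latent
    vector to per-voxel occupancy probabilities in a 64^3 grid."""
    logits = rng.standard_normal((64, 64, 64)) * z.mean()
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid occupancy

z = rng.standard_normal(200)   # 3D-GAN samples a 200-d latent vector
voxels = toy_3d_generator(z)
occupied = voxels > 0.5        # binarise to a solid shape for rendering
print(voxels.shape)
```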

Here's a video detailing the application:
Invertible Conditional GANs (IcGANs) combine an encoder with conditional GANs to re-generate real images with deterministic, complex modifications.

In this paper, a conditional generative adversarial network (cGAN) is used for identity-preserving face aging.

The contributions of the paper are as follows:
According to the authors, pix2pix uses conditional adversarial networks as a general-purpose solution to image-to-image translation problems.

These networks not only learn the mapping from an input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations.
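Concretely, the pix2pix generator is trained with an adversarial term plus a lambda-weighted L1 term (lambda = 100 in the paper). A minimal sketch of that combined objective, with toy arrays standing in for real images and discriminator outputs:

```python
import numpy as np

def l1_loss(fake, target):
    """Pixel-wise L1 distance between generated and ground-truth images."""
    return np.abs(fake - target).mean()

def cgan_generator_loss(d_fake):
    """Non-saturating adversarial term: the generator wants the
    discriminator's score on fake pairs, d_fake in (0, 1), to be high."""
    return -np.log(d_fake + 1e-8).mean()

def pix2pix_generator_loss(d_fake, fake, target, lam=100.0):
    """pix2pix objective: adversarial loss plus lambda-weighted L1."""
    return cgan_generator_loss(d_fake) + lam * l1_loss(fake, target)

rng = np.random.default_rng(3)
fake = rng.uniform(0, 1, (8, 8))
target = rng.uniform(0, 1, (8, 8))
d_fake = np.array([0.4])
print(pix2pix_generator_loss(d_fake, fake, target))
```

The L1 term keeps outputs close to the ground truth while the adversarial term pushes them toward the distribution of realistic images, which is what lets the same recipe work across very different translation tasks.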
Like pix2pix, CycleGAN performs image-to-image translation and has multiple use cases; unlike pix2pix, it learns the mapping from unpaired data.
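The trick that makes unpaired training work is CycleGAN's cycle-consistency loss: translating X to Y and back should reproduce the input. A minimal sketch, with trivially invertible lambdas standing in for the two real generator networks (lambda = 10 follows the paper):

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """CycleGAN's cycle term: F(G(x)) should recover x, and
    G(F(y)) should recover y."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> G(x) -> F(G(x)) ~ x
    backward = np.abs(G(F(y)) - y).mean()  # y -> F(y) -> G(F(y)) ~ y
    return lam * (forward + backward)

# Toy invertible "generators" standing in for the real networks:
G = lambda img: img + 1.0   # X -> Y
F = lambda img: img - 1.0   # Y -> X

rng = np.random.default_rng(4)
x = rng.standard_normal((4, 4))
y = rng.standard_normal((4, 4))
print(cycle_consistency_loss(G, F, x, y))  # ~0: perfect cycles (up to rounding)
```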

Some applications of CycleGAN are as follows:
Using MGAN, a texture from an image can be applied to another image. For example, the texture from Van Gogh's "Olive Trees with the Alpilles in the Background" applied to other images results in the following:


Using PixelDTGAN, one can generate images of objects from an input image, similar to pix2pix or CycleGAN.

In the above example, the images on the left are the input, and the images on the right are generated clothes using PixelDTGAN.
Deep Photo Style Transfer is one of the many interesting applications of GANs, since its results are mesmerizing.


According to the authors, the paper introduces a deep-learning approach to photographic style transfer that handles a large variety of image content while faithfully transferring the reference style. Their approach builds upon the recent work on painterly transfer that separates style from the content of an image by considering different layers of a neural network. However, as is, this approach is not suitable for photorealistic style transfer. Even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting.
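The painterly-transfer work this builds on (Gatys et al.) represents "style" with Gram matrices of network feature maps; Deep Photo Style Transfer adds a photorealism regularizer on top, which is not shown here. A minimal sketch of the underlying Gram-matrix style loss, with random arrays standing in for real convolutional features:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map: the
    channel-by-channel correlations that encode 'style'."""
    c, hw = features.shape
    return features @ features.T / hw

def style_loss(feat_output, feat_style):
    """Squared Frobenius distance between the two Gram matrices."""
    return ((gram_matrix(feat_output) - gram_matrix(feat_style)) ** 2).mean()

rng = np.random.default_rng(5)
feat_style = rng.standard_normal((16, 32 * 32))
print(style_loss(feat_style, feat_style))  # 0.0 when styles match exactly
```

Because the Gram matrix discards spatial layout and keeps only feature correlations, it transfers color and texture statistics without forcing the output to copy the reference image's composition.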
Image Inpainting using Context Encoders allows computers to fill the missing parts of an image using GANs.
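The setup in the Context Encoders paper can be sketched as follows: a mask removes a central region, the network only ever sees the corrupted image, and a reconstruction loss is applied over the hole (the paper pairs this with an adversarial loss, omitted here). The arrays and the 0.5 "prediction" below are illustrative stand-ins:

```python
import numpy as np

def center_mask(size, hole):
    """Binary mask that is 0 in a central hole and 1 elsewhere,
    as in the central-region setting of the paper."""
    m = np.ones((size, size))
    start = (size - hole) // 2
    m[start:start + hole, start:start + hole] = 0.0
    return m

def masked_reconstruction_loss(pred, target, mask):
    """L2 reconstruction penalty applied only where content was removed."""
    hole = 1.0 - mask
    return (hole * (pred - target) ** 2).sum() / hole.sum()

rng = np.random.default_rng(6)
img = rng.uniform(0, 1, (128, 128))
mask = center_mask(128, 64)
corrupted = img * mask          # the network only ever sees this
pred = np.full_like(img, 0.5)   # stand-in for the generator's fill
print(masked_reconstruction_loss(pred, img, mask))
```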

According to the GitHub readme, this project uses deep learning to upscale 16x16 images by a 4x factor. The resulting 64x64 images display sharp features that are plausible based on the dataset that was used to train the neural net.
The example below shows what this network can do. From left to right: the first column is the 16x16 input image, the second is what you would get from standard bicubic interpolation, the third is the output generated by the neural net, and the rightmost is the ground truth.

The application is based on Deep Convolutional Generative Adversarial Network (DCGAN).
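To see why a learned model is needed at all, compare against the naive baseline: a plain 4x upsampling only enlarges pixels, it cannot recover detail. The sketch below shows that shape transformation (the function is a nearest-neighbour baseline, not the project's DCGAN):

```python
import numpy as np

def upscale_4x_nearest(img):
    """Naive 4x nearest-neighbour baseline; the GAN's advantage is
    hallucinating plausible high-frequency detail that simple
    interpolation like this cannot recover."""
    return np.kron(img, np.ones((4, 4)))

rng = np.random.default_rng(7)
low_res = rng.uniform(0, 1, (16, 16))
high_res = upscale_4x_nearest(low_res)
print(low_res.shape, "->", high_res.shape)  # (16, 16) -> (64, 64)
```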
According to the authors, 'We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024².'
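When a new, higher-resolution layer is added, its output is faded in smoothly rather than switched on at once: the network blends the new layer's output with the upsampled output of the previous resolution, ramping a weight alpha from 0 to 1. A minimal sketch of that blend, with random arrays standing in for the two layers' outputs:

```python
import numpy as np

def fade_in(new_layer_out, old_upsampled, alpha):
    """Progressive GAN fade-in: blend a freshly added high-resolution
    layer with the upsampled output of the already-stable resolution;
    alpha ramps from 0 to 1 over the course of training."""
    return alpha * new_layer_out + (1.0 - alpha) * old_upsampled

rng = np.random.default_rng(8)
old = rng.standard_normal((8, 8))
old_up = np.kron(old, np.ones((2, 2)))   # 8x8 -> 16x16 upsample
new = rng.standard_normal((16, 16))      # new layer's 16x16 output

print(np.allclose(fade_in(new, old_up, 0.0), old_up))  # True: starts as old net
print(np.allclose(fade_in(new, old_up, 1.0), new))     # True: ends as new net
```

This gradual hand-off is what keeps training stable while the effective resolution grows.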

This is what GANs were originally intended for, and it looks like we've made a lot of progress in the quality of the results.
The above 14 GAN applications were quite intriguing, and there are certainly more out there. If you have recommendations to add to the list, please feel free to leave them in the comments below.