Scalable image generation and super resolution using generative adversarial networks



GÜZEL TURHAN C., BİLGE H. Ş.

JOURNAL OF THE FACULTY OF ENGINEERING AND ARCHITECTURE OF GAZI UNIVERSITY, vol. 35, no. 2, pp. 953-966, 2020 (SCI-Expanded)

Abstract

Generative adversarial training has been one of the most active research topics, and many researchers have conducted studies on the Generative Adversarial Network (GAN) shortly after it was described by pioneers of the deep learning community as one of the most promising research areas of the last decade. On the other hand, the idea behind generators has also revived autoencoder models such as the Variational Autoencoder (VAE), and autoencoder models have therefore regained their popularity. To address some restrictions of GAN models, such as the lack of an inference mechanism, hybrid models based on GAN and VAE have been proposed for image generation. Motivated by these ideas and studies, we also consider VAE and GAN hybrid models. To obtain synthetic yet high-resolution handwritten-looking images without additional training, a Compositional Pattern Producing Network (CPPN) is adapted from previous studies and combined with VAE and adversarial training. To improve generation capability, an objective from a previous VAE/GAN model is also adapted for our VAE/CPGAN hybrid model. To analyze the performance of the proposed model, baseline models such as GAN, VAE, and VAE/GAN are also evaluated for comparison. In this paper, it is clearly shown that the proposed model is capable of generating realistic and scalable super-resolution synthetic images on a common dataset.
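
The scalability claim rests on the CPPN acting as a coordinate-based decoder: each pixel value is predicted from its position together with a shared latent code, so the same network can render a sample at any resolution. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the class name CPPNDecoder, the layer sizes, and the (x, y, r) coordinate features are illustrative assumptions.

# Minimal sketch (not the paper's code) of a CPPN-style decoder: a latent
# vector z (e.g. from a VAE encoder) plus per-pixel coordinates are mapped
# to pixel intensities, so resolution is a free parameter at render time.
import torch
import torch.nn as nn

class CPPNDecoder(nn.Module):
    def __init__(self, latent_dim=32, hidden_dim=128):
        super().__init__()
        # Inputs per pixel: x, y, radius r, plus the shared latent code z.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 1), nn.Sigmoid(),  # grayscale intensity
        )

    def forward(self, z, height, width):
        # Build a coordinate grid in [-1, 1]; the same z can be rendered
        # at any (height, width), which is what makes generation scalable.
        ys = torch.linspace(-1, 1, height)
        xs = torch.linspace(-1, 1, width)
        y, x = torch.meshgrid(ys, xs, indexing="ij")
        r = torch.sqrt(x ** 2 + y ** 2)
        coords = torch.stack([x, y, r], dim=-1).reshape(-1, 3)   # (H*W, 3)
        z_rep = z.expand(coords.shape[0], -1)                    # (H*W, latent_dim)
        pixels = self.net(torch.cat([coords, z_rep], dim=-1))    # (H*W, 1)
        return pixels.reshape(1, 1, height, width)

# Example: the same latent code rendered at two resolutions.
decoder = CPPNDecoder()
z = torch.randn(1, 32)
small = decoder(z, 28, 28)    # MNIST-sized output
large = decoder(z, 112, 112)  # 4x upscaled rendering of the same sample

In a hybrid of the kind described above, such a decoder would take the place of the usual convolutional generator, with the VAE encoder supplying z and an adversarial discriminator judging the rendered images; the specific objective and architecture used in the paper are not reproduced here.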