Interactive, Efficient and Creative Image Generation Using Compositional Pattern-Producing Networks

Tags: Compositional Pattern-Producing Networks, Generative Adversarial Networks and Image Generation
Abstract:
In contrast to most recent models, which generate an entire image at once, the paper introduces a new architecture for generating images one pixel at a time, using a Compositional Pattern-Producing Network (CPPN) as the generator of a Generative Adversarial Network (GAN). This allows visually interesting images with artistic value to be generated effectively at arbitrary resolutions, independent of the dimensions of the training data. The architecture, together with accompanying (hyper-)parameters for training CPPNs with recent GAN stabilisation techniques, is shown to generalise well across many standard datasets. Rather than relying solely on a latent noise vector, which entangles features with one another, mutual information maximisation is utilised to obtain disentangled representations, removing the requirement for labelled data and giving the user control over the generated images. A web application for interacting with pre-trained models was also created, unique in the level of interactivity it offers with an image-generating GAN.
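To illustrate the pixel-wise generation idea described above, the sketch below shows a minimal, untrained CPPN-style generator in NumPy: each pixel is computed independently from its coordinates (x, y, r) and a shared latent vector z, so the same network can be sampled at any resolution. This is not the paper's implementation; the function name `cppn_image`, the layer sizes, and the activation choices are assumptions made for illustration only.

```python
import numpy as np

def cppn_image(z, width, height, hidden=32, layers=3, seed=0):
    """Render an image pixel-by-pixel from coordinates and a latent vector z.

    Illustrative sketch only: weights are random (untrained), and the
    architecture (layer sizes, tanh activations, sigmoid output) is an
    assumption, not the paper's exact configuration.
    """
    rng = np.random.default_rng(seed)

    # Coordinate inputs: normalised x, y and radial distance r for every pixel.
    xs = np.linspace(-1.0, 1.0, width)
    ys = np.linspace(-1.0, 1.0, height)
    x, y = np.meshgrid(xs, ys)                            # (height, width)
    r = np.sqrt(x ** 2 + y ** 2)
    coords = np.stack([x, y, r], axis=-1).reshape(-1, 3)  # (H*W, 3)

    # Every pixel shares the same latent vector, so z controls global style.
    inp = np.concatenate([coords, np.tile(z, (coords.shape[0], 1))], axis=1)

    # Small fully connected network applied independently to each pixel.
    h = inp
    in_dim = h.shape[1]
    for _ in range(layers):
        w = rng.standard_normal((in_dim, hidden))
        h = np.tanh(h @ w)
        in_dim = hidden
    w_out = rng.standard_normal((in_dim, 1))
    out = 1.0 / (1.0 + np.exp(-(h @ w_out)))              # grayscale intensity in (0, 1)

    return out.reshape(height, width)

# The same latent vector can be rendered at any resolution,
# since the network is a function of pixel coordinates.
z = np.random.default_rng(42).standard_normal(8)
small = cppn_image(z, 64, 64)
large = cppn_image(z, 512, 512)
```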