Generative Adversarial Networks (GANs) have enjoyed enormous success since Ian J. Goodfellow and co-authors introduced them in 2014 in the paper Generative Adversarial Nets.
GANs are architectures that pit two neural networks against each other (hence "adversarial") in order to generate new, synthetic data that can pass for real data: a generator produces candidate samples while a discriminator tries to tell them apart from genuine ones. They are widely used in image, video, and voice generation, and their ability to dream up realistic images of landscapes, cars, cats, people, and even video games represents a significant step forward in artificial intelligence.
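To make the two-network setup concrete, here is a minimal sketch of adversarial training in TensorFlow/Keras. The layer sizes, latent dimension, and learning rates are illustrative assumptions and do not reflect StyleGAN2's actual architecture or training scheme.

```python
# Minimal GAN sketch: a generator and a discriminator trained adversarially.
# All sizes and constants here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 64  # size of the random noise vector fed to the generator (assumed)

# Generator: maps random noise to a flat 28x28 "image".
generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
])

# Discriminator: scores how "real" an image looks (output is a logit).
discriminator = tf.keras.Sequential([
    layers.Input(shape=(28 * 28,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),
])

g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(real_images):
    batch = tf.shape(real_images)[0]
    noise = tf.random.normal([batch, LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # The discriminator tries to label real images 1 and fakes 0...
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # ...while the generator tries to make fakes the discriminator calls real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```

In practice, train_step would be called repeatedly over batches of real images from the target dataset, with the two networks improving in tandem.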
This new project, called StyleGAN2 and presented at CVPR 2020 by Nvidia, uses transfer learning to generate a seemingly infinite number of portraits in an infinite variety of painting styles. The work builds on the team's previously published StyleGAN project.
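The transfer-learning idea can be sketched in the same spirit, reusing the generator, discriminator, optimizers, and train_step from the sketch above: load weights pretrained on one domain and continue adversarial training on a small dataset in the target style. The weight file names, learning rate, and placeholder dataset below are hypothetical; the actual StyleGAN2 training code in the repository works differently.

```python
# Continuing the sketch above: fine-tune a pretrained GAN on a new style.
# The weight file names are hypothetical placeholders, and random tensors
# merely stand in for a real dataset of paintings.
generator.load_weights("pretrained_faces_generator.h5")          # hypothetical file
discriminator.load_weights("pretrained_faces_discriminator.h5")  # hypothetical file

# Lower learning rates (assumed values) so fine-tuning gently adapts the
# pretrained weights to the new painting style instead of overwriting them.
g_opt.learning_rate = 1e-5
d_opt.learning_rate = 1e-5

# Stand-in for a small painting dataset: random flattened 28x28 "images".
painting_dataset = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform([256, 28 * 28])).batch(32)

for real_paintings in painting_dataset:
    d_loss, g_loss = train_step(real_paintings)
```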
Because of its interactivity, the resulting network can be a powerful tool for artistic expression, the researchers at Nvidia Labs stated in the video. Users can also modify the artistic style, color scheme, and appearance of brush strokes.
StyleGAN2 project with TensorFlow backend by NVIDIA Labs
All code is available at the StyleGAN2 GitHub repository
Team at NVlabs (NVIDIA Labs)