Generating High-Quality Images Using StyleGAN2:



Generative Adversarial Networks (GANs for short) have been hugely successful since they were introduced in 2014 by Ian J. Goodfellow and co-authors in the paper Generative Adversarial Nets.

GANs are architectures that pit two neural networks against each other (hence the “adversarial”): a generator produces synthetic data meant to pass for real data, while a discriminator tries to tell the two apart. They are widely used for image, video, and voice generation. Their ability to dream up realistic images of landscapes, cars, cats, people, and even video games represents a significant step in artificial intelligence.
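To make the adversarial game concrete, here is a minimal, self-contained training-loop sketch in TensorFlow/Keras on MNIST digits. It is purely illustrative and unrelated to the StyleGAN2 code base; the model sizes, learning rates, and number of steps are arbitrary assumptions.

import tensorflow as tf

LATENT_DIM = 100
BATCH_SIZE = 128

# Generator: maps random noise vectors to 28x28 images.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
    tf.keras.layers.Dense(28 * 28, activation="tanh"),
    tf.keras.layers.Reshape((28, 28, 1)),
])

# Discriminator: classifies images as real (1) or generated (0).
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),  # raw logits
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([BATCH_SIZE, LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)

        # The discriminator tries to tell real from fake...
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # ...while the generator tries to fool it.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)

    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return d_loss, g_loss

# Scale MNIST digits to [-1, 1] to match the generator's tanh output.
(train_images, _), _ = tf.keras.datasets.mnist.load_data()
train_images = (train_images.astype("float32") - 127.5) / 127.5
train_images = train_images.reshape(-1, 28, 28, 1)
dataset = tf.data.Dataset.from_tensor_slices(train_images) \
                         .shuffle(10_000).batch(BATCH_SIZE, drop_remainder=True)

for step, batch in enumerate(dataset.take(200)):
    d_loss, g_loss = train_step(batch)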

This new project, StyleGAN2, presented at CVPR 2020 by NVIDIA, uses transfer learning to generate a seemingly infinite number of portraits in an endless variety of painting styles. The work builds on the team’s previously published StyleGAN project.
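As a rough illustration of the transfer-learning idea (entirely separate from the official StyleGAN2 training code; the toy model, its shapes, and the dataset names here are my own assumptions), a new generator can be initialized from the weights of one already trained on a source domain and then trained further on a smaller target dataset:

import tensorflow as tf

LATENT_DIM = 64

def make_generator() -> tf.keras.Model:
    # Toy stand-in for a generator network; StyleGAN2's real architecture is far larger.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(LATENT_DIM,)),
        tf.keras.layers.Dense(28 * 28, activation="tanh"),
        tf.keras.layers.Reshape((28, 28, 1)),
    ])

# Pretend this generator was already trained on a large source dataset (e.g. photographs of faces).
source_generator = make_generator()

# Transfer learning: copy the learned weights into the model that will be trained further,
# instead of starting from a random initialization.
target_generator = make_generator()
target_generator.set_weights(source_generator.get_weights())

# From here, the usual adversarial training loop (see the sketch above) continues on the
# new, typically much smaller target dataset, e.g. portrait paintings.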



Who can benefit from this project?

Because of its interactivity, the resulting network can be a powerful tool for artistic expression, the researchers at NVIDIA stated in the video. Users can also modify the artistic style, color scheme, and appearance of brush strokes.



Demo:



Algorithm/Framework used for the project:

StyleGAN2 project with a TensorFlow backend, by NVIDIA Labs (NVlabs)





Code:

All code is available in the official StyleGAN2 GitHub repository (NVlabs/stylegan2).
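For a sense of how the released code is used, sampling images from a pretrained model looks roughly like the following, condensed from the repository's run_generator.py. It assumes the repository's TensorFlow 1.x environment is set up and the repo is on the Python path; the network path is the pretrained FFHQ model referenced in the README, and exact arguments should be checked against the repository itself.

import numpy as np
import PIL.Image

import dnnlib.tflib as tflib       # utilities shipped with the StyleGAN2 repo
import pretrained_networks         # network loader shipped with the StyleGAN2 repo

# Pretrained faces model referenced in the repository README.
network_pkl = "gdrive:networks/stylegan2-ffhq-config-f.pkl"
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)  # Gs is the generator used for sampling

# Draw a random latent vector and map it to an image.
rnd = np.random.RandomState(42)
z = rnd.randn(1, *Gs.input_shape[1:])

# truncation_psi < 1 trades sample diversity for image quality.
images = Gs.run(z, None,
                truncation_psi=0.5,
                randomize_noise=False,
                output_transform=dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True))

PIL.Image.fromarray(images[0], "RGB").save("sample.png")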



Made By:

The team at NVlabs (NVIDIA Labs)







