The purpose of this website/guide is to help you identify visible signs that an image may be AI-generated. By familiarizing yourself with common artifacts and errors in AI images, you can develop a discerning eye for distinguishing AI-generated content from real photographs. The sections that follow explain the common mistakes to look out for when viewing AI-generated images.
The Creation of AI-Generated Content
Photorealistic AI-generated images began gaining attention with the introduction of Generative Adversarial Networks (GANs) in 2014. GANs were introduced by Ian Goodfellow and his colleagues and work by having two neural networks compete with each other: one network (the generator) creates images, while the other (the discriminator) evaluates them, helping the generator improve over time. By 2018, GANs were able to produce remarkably lifelike images of human faces, which were then featured on websites like This Person Does Not Exist. From 2014 to 2021, GANs advanced rapidly, becoming better at creating not just images but also text, audio, and video. They became known for generating high-resolution images, transforming photos into paintings, and even predicting future frames in video.
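For readers curious about what that generator-versus-discriminator competition looks like in practice, here is a minimal sketch of an adversarial training loop using PyTorch. It is illustrative only: the data is a toy 2D Gaussian rather than real images, and the network sizes, learning rates, and step count are assumptions chosen for brevity, not settings from any real image model.

```python
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise fed to the generator (illustrative choice)

# Generator: maps random noise to fake "samples" (here, 2D points instead of images).
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Discriminator: outputs a probability that its input came from the real data.
discriminator = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" data: points drawn from a shifted Gaussian (a stand-in for real photos).
    real = torch.randn(64, 2) + torch.tensor([2.0, 2.0])
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator to label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator into outputting 1 for fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

As the two networks push against each other, the generator's outputs drift toward the real data distribution, which is the same dynamic that, at much larger scale, produces photorealistic faces.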
By 2024, a newer type of AI model, the diffusion model, had become the leading approach for producing highly detailed, realistic images. Diffusion models power systems like DALL-E, Midjourney, and Stable Diffusion and can create images that look even more convincing than those made by GANs.