AI-generated images are now being used by media and businesses (see Levi’s new campaign). This already complicates the lives of photographers, who feel threatened with being replaced (the irony being that their own online photos were used to train these AIs without their consent), but also of users, who must learn to tell the real from the fake.

Learning to recognize them will help you avoid several risks, because these images can be used to deceive people or for malicious activities such as online fraud, data manipulation, harassment…

In my animal photo library, when you buy a photo, I offer to authenticate it by sharing the EXIF data stored in the metadata of the image file. On request, I can also send you a screenshot of the original RAW file (the photo before processing).

1. Check metadata

It is possible to check whether a photo is fake using its metadata, although this depends on how the photo was edited. Metadata is information embedded in the image file, such as the creator’s name, copyright, date of creation, GPS coordinates, and the camera used and its settings.

Metadata can reveal whether a photo has been modified by comparing the creation and modification dates. If the dates don’t match, or if the modification date is later than the creation date, the photo may have been edited. However, this cannot tell you exactly which part of the photo was altered.

Regarding location, if the metadata indicates that the photo was taken in a specific place, it can help verify whether the photo is genuine. If the photo was supposedly taken somewhere it could not have been (for example, if it shows an animal that is not present in that area), this may be a sign that the photo is fake.

Finally, the metadata may also indicate the model and settings of the device used to take the photo. If these settings are very different from what you would expect for the shot in question (for example, settings that make no sense for a landscape photo), this may indicate that the photo has been edited.

Keep in mind, however, that metadata can be intentionally deleted or altered, so you cannot rely on metadata alone to verify a photo’s authenticity. Free tools to check photo metadata: dCode, FotoForensics.
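As a quick illustration, here is a minimal sketch of how this metadata can be read programmatically with the Python library Pillow. The file name is a placeholder, and the output simply lists whatever EXIF tags (dates, camera model, settings…) are still present in the file.

```python
# A minimal sketch: print the EXIF metadata of an image with Pillow.
# "photo.jpg" is just a placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path):
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found - it may have been stripped or never written.")
        return
    for tag_id, value in exif.items():
        # Translate numeric tag IDs (e.g. 272) into readable names (e.g. "Model")
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

print_exif("photo.jpg")
```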

Stock photo agencies like Alamy that accept AI images into their libraries require contributors to tag their photos as AI-generated in the metadata: in the image title, description, and image tags (making it easy to find or exclude AI-created photos when browsing their catalogs). Finding these tags is the easiest way to spot an AI-generated image.
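For libraries that follow this convention, the tag can even be checked programmatically. The sketch below looks for an “AI generated” keyword in the IPTC metadata using Pillow; the exact keyword wording and the file name are assumptions on my part, so adapt them to the agency you are browsing.

```python
# A minimal sketch: look for an "AI generated" keyword in a photo's IPTC metadata.
# The keyword wording and the file name are assumptions - adapt them as needed.
from PIL import Image, IptcImagePlugin

def has_ai_keyword(path):
    iptc = IptcImagePlugin.getiptcinfo(Image.open(path)) or {}
    keywords = iptc.get((2, 25), [])        # (2, 25) is the IPTC "Keywords" field
    if isinstance(keywords, bytes):         # a single keyword comes back as bytes
        keywords = [keywords]
    decoded = [k.decode("utf-8", errors="ignore").lower() for k in keywords]
    return any("ai generated" in k for k in decoded)

print(has_ai_keyword("stock_photo.jpg"))
```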

Example of a photo’s metadata as displayed in Adobe Lightroom

2. Find the original image

Do a reverse image search to find the original image in its original context. You can use reverse image search tools, such as Google Images or TinEye, to see if the image has appeared elsewhere on the internet. If it has been used in different contexts, this may indicate that the image is fake or has been taken out of its original context. When tracing it back to the original source, also check the credibility of the articles or publications that used it. Reverse lookup is also a good way for photographers to check that their images aren’t being used without their permission: the fact that their photographs are freely accessible does not give anyone the right to reproduce or distribute them without authorization. Know more…
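If you do these checks regularly, you can script part of the lookup. The sketch below simply builds search URLs to open in a browser; the query-parameter patterns for Google Lens and TinEye are assumptions on my part and may change, so treat it as a convenience helper rather than an official API.

```python
# A convenience sketch: build reverse-image-search URLs for an image that is
# already online. The URL patterns below are assumptions and may change.
from urllib.parse import quote

def reverse_search_urls(image_url):
    encoded = quote(image_url, safe="")
    return {
        "Google Lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
    }

for engine, url in reverse_search_urls("https://example.com/photo.jpg").items():
    print(engine, url)
```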

3. Use tools that detect whether the photo has been manipulated by AI or Photoshop

FotoForensics uses an advanced algorithm to detect photoshopped and otherwise manipulated images. It relies on Error Level Analysis (ELA) to identify areas of an image that are at different compression levels. In a JPEG image, the entire picture should be at roughly the same error level; if a section is at a significantly different error level, it likely indicates a digital modification. FotoForensics aims to simplify the evaluation process. It works like a microscope, highlighting artifacts and details that the human eye may not be able to spot. With the right tools and a little training, most people can quickly learn how to assess images, and FotoForensics provides online tutorials for self-paced learning.

More recently, Google has offered an AI classification tool that analyzes images to classify their content and assign tags to them, which will allow Google to automatically detect AI-generated images. A free demo of Google Vision AI is available until June 2023. This analysis tool can automatically classify all the elements of an image (emotions on faces, objects, text…), but it can also be used on its own to see how an image-detection algorithm “perceives” your images and whether they are suitable for SEO.
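To get a feel for what ELA does under the hood, here is a rough sketch of the principle in Python with Pillow. This is not FotoForensics’ actual code, just the general idea: re-save the JPEG at a known quality and amplify the difference; regions that compress differently from the rest of the frame stand out. File names are placeholders.

```python
# A rough sketch of Error Level Analysis (ELA): re-save the JPEG at a known
# quality and amplify the per-pixel difference with the original.
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    # Re-save the image as JPEG at a known quality, in memory
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    # Pixels already at this compression level barely change; edited regions
    # tend to stand out once the difference is brightened
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)

error_level_analysis("photo.jpg").save("photo_ela.png")
```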

Other free detection tools:

No visible metadata on this fake photo created by Eldagsen, which nevertheless deceived the jury at the Sony World Photography Awards
Example of metadata on a real photo
Differences in contrast, compression and white edges used to identify retouching

4. Look for anomalies in the image

A good way to spot an AI-generated image is to look for anomalies: visual errors caused by the imperfect functioning of machine-learning algorithms during the creation process. Glossy textures, irregular eyes, unnatural-looking teeth, missing or deformed body parts (AI generators still struggle to reproduce human hands), furniture or glasses that merge with a person’s face, text inside the image, cats with the tail in the wrong place, etc.

Other visual distortions may be less obvious, so look carefully: missing or mismatched text, a blurred background where there shouldn’t be one, blurs that don’t look intentional, incorrect lighting and shadows, etc.

If you find any of these in an image, you are most likely looking at an AI-generated image.
However, generative AI models like Midjourney or DALL-E seem to release an improved version of their apps every day, producing higher-quality images each time. So it is still possible for a decent-looking image without visual errors to have been produced by AI.

Issues with the eyes, legs and horizon on this “cat on a volcanic island”
Issues with faces and bodies
Surrealistic proportions for this “cat on a cloud”

Going further: is there copyright for an AI-generated image?

Digital art, images, poems and books generated using tools such as DALL-E, Stable Diffusion, Midjourney and ChatGPT-4 are not protected by copyright, because they were not created by a human. So if you generate content using AI tools, you don’t own any rights to it. For the moment…
See on this subject the case of the comic book Zarya of the Dawn, illustrated with the AI Midjourney, which was ultimately denied copyright. UPDATE 08/28/2023: a US court confirms AI-generated art cannot receive copyright.
Are you a photographer? Discover in this article how to protect your work against AI. My AI artwork above is available here on Getty Images and was used for the France Inter podcast “Intelligence artificielle : les Français sont-ils bien placés ?”

Jacques Julien

French photographer based in Paris, specialising in black-and-white photography, animal photos, architecture and portraits.