Google Photos will now show you when images have been edited with Magic Editor’s Reimagine tool.
Google is making it easier to tell when images have been modified with AI by adding invisible digital watermarks to pictures edited with its powerful Reimagine tool.
February 9 update below: This article was originally posted on February 7
Reimagine, launched with the Pixel 9 series as part of the Google Photos Magic Editor feature, lets you circle any region in an image and transform it into anything you like by typing a simple text prompt. These AI-generated edits are often creative and amusing but are potentially misleading, as it can become difficult to tell what’s real and what is not.
Google Photos: AI-Edited Images Will Be Watermarked
Now, as revealed in a recent Google Photos blog post, images manipulated with Reimagine will receive an invisible watermark, embedded using a Google DeepMind technology called SynthID, that alerts others to the presence of AI-powered modifications. If a picture contains a SynthID watermark, you’ll be able to tell by checking its associated “About this image” information.
This information should make it much easier to spot fake images on social media or sent via messaging apps — if they were edited with Google’s AI tools.
For example, images edited with Reimagine will display an “AI info” section in Google Photos stating “Credit: Edited with Google AI, Digital source type: Edited using Generative AI.” You can also view this information outside of Google Photos: If someone sends you a suspicious-looking photo, just use Circle to Search to examine the image and check for AI-generated elements as described above.
SynthID watermarks can be read only by specific decoder software and aren’t visible to the naked eye. Because they form part of the image itself, they’re harder to remove than regular image tags, which anyone can easily strip out of image files, deliberately or accidentally when using software that doesn’t preserve them.
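To see why an embedded watermark is more robust, consider how little it takes to lose conventional metadata. The short Python sketch below, which uses the third-party Pillow library and a hypothetical photo.jpg (neither mentioned in Google’s announcement), re-saves only an image’s pixel data, silently discarding any EXIF or XMP credit tags; a watermark woven into the pixels themselves would survive the same round trip.

from PIL import Image

# Open a hypothetical AI-edited photo and copy only its pixel data.
original = Image.open("photo.jpg")
stripped = Image.new(original.mode, original.size)
stripped.putdata(list(original.getdata()))

# Saving the pixel-only copy drops EXIF/XMP metadata such as "Credit" tags,
# while anything encoded in the pixels themselves is carried across intact.
stripped.save("photo_stripped.jpg")

This kind of accidental stripping happens routinely when apps resize, recompress or screenshot images, which is exactly the scenario pixel-level watermarks are designed to withstand.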
Google already uses SynthID to watermark pictures created with its Imagen image generation tool. Now it is extending the technology to pictures, AI-generated or not, that users edit with Reimagine. Reimagine itself is currently available only on the Pixel 9 series and newer, but the resulting SynthID watermarks can be surfaced on other devices through Google tools such as Circle to Search and Google Lens.
Google Photos AI Watermarks — An Imperfect Solution
SynthID watermarks are designed to withstand casual editing and manipulation and are hard to remove, although repeated edits can cause them to degrade. Furthermore, Google notes that minor edits, such as “changing the color of a small flower in the background of an image,” may be too small for SynthID to detect and so escape watermarking altogether.
In addition to images, SynthID can also watermark audio, text and video. Software to encode and decode SynthID watermarks in text is already publicly available, but Google is keeping its image-based watermarking tools under wraps for now. This approach helps slow down those who might develop ways to circumvent the technology but also means that, without open scrutiny, we can’t tell what other information Google might choose to embed in our images with invisible SynthID watermarks.
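As a rough illustration of what “publicly available” means on the text side, the sketch below uses the SynthID Text integration in the open-source Hugging Face Transformers library (version 4.46 or later) with the google/gemma-2-2b model; neither the library nor the model is named in Google’s announcement, and the watermarking keys shown are arbitrary placeholders. Detecting the watermark afterwards requires a separately trained detector, which is beyond this sketch.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b")

# Arbitrary placeholder keys; in practice whoever deploys the watermark keeps
# these secret, since they define the signature being embedded.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57],
    ngram_len=5,
)

prompts = tokenizer(["Write a short caption for a holiday photo."],
                    return_tensors="pt", padding=True)
outputs = model.generate(
    **prompts,
    watermarking_config=watermarking_config,  # biases token choices to embed the watermark
    do_sample=True,
    max_new_tokens=50,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])

No equivalent open tooling exists for SynthID image watermarks, which is the gap discussed below.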
February 9 update: SynthID must become more widely available if it is to be truly useful
Google Must Make SynthID Tools More Widely Available
While Google’s move to watermark the output of its AI tools is to be commended, SynthID for images is just one small weapon in the fight against deepfakes and AI-generated disinformation. One significant limiting factor is that, unlike with text, only Google can currently detect SynthID watermarks in images. This means that while Google Photos, Circle to Search and Google Lens will flag SynthID-watermarked images, other software and services will not.
Meta requires users to label certain realistic content that’s made with AI
If you upload a “Reimagined” image to Instagram, for example, the app won’t detect that your picture contains AI-generated components. Instagram prompts users to add an AI label to uploaded images where appropriate, noting that Meta requires such labeling to be used under certain circumstances. However, the decision to activate the label is still left up to the uploader.
This contrasts with tools such as Adobe Photoshop, which attach industry-standard tags that services like Instagram can read and use to label AI-generated content automatically. To ensure wider adoption, Google must make SynthID decoders available to third parties for images as well as text.
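For comparison, here is a crude sketch of how easily software can check for those industry-standard tags today: it simply scans a hypothetical photo.jpg for the IPTC “digital source type” values used to mark AI-generated or AI-composited imagery. Real verifiers, such as Content Credentials tools, do this far more rigorously, and nothing comparable is currently possible for SynthID image watermarks without Google’s decoder.

# IPTC NewsCodes digital-source-type values used by industry metadata schemes
# to flag AI-generated or AI-composited imagery.
AI_SOURCE_MARKERS = (
    "digitalsourcetype/trainedAlgorithmicMedia",
    "digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
)

with open("photo.jpg", "rb") as f:      # hypothetical exported image
    raw = f.read().decode("latin-1")    # crude pass over any embedded XMP text

found = [marker for marker in AI_SOURCE_MARKERS if marker in raw]
if found:
    print("Industry-standard AI source tags found:", found)
else:
    print("No AI source tags found (or they were stripped out).")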
Casual users of Magic Editor are unlikely to care, or even know, about Google’s watermarking, while those who set out to deliberately misinform will almost certainly be able to circumvent it. The many free and open-source AI tools available do not, and never will, compel users to embed tags or watermarks that identify their output as AI-generated.
Follow @paul_monckton on Instagram.