Google launches tool to combat AI deepfakes

Google has launched a watermarking tool that labels images as AI-generated, in a bid to curb the proliferation of deepfakes.

The tool was developed by Google’s AI lab, Google DeepMind, and is called SynthID, according to a report in MIT Technology Review.

The watermarking technology will initially be available only to users of Google’s AI image generator Imagen, hosted on Google Cloud’s machine learning platform Vertex.

Imagen users will be able to add watermarks to the AI-generated images they create with the program.

Google DeepMind expects the tool to help people tell when AI-generated content is being passed off as real, as well as to help protect copyright.

According to Google DeepMind, traditional watermarks aren’t sufficient for identifying AI-generated images "because they’re often applied like a stamp on an image and can easily be edited out." 

Digital watermarking, which involves hiding a signal in a piece of text or an image to identify its origin, is one of the main solutions proposed to mitigate the risks of deepfakes and copyright infringement posed by the rise of generative AI.
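To illustrate the general idea of hiding a signal in an image, here is a deliberately simple toy sketch using least-significant-bit (LSB) embedding. This is not SynthID's method (DeepMind has not published its technique, and it uses trained deep learning models rather than fixed bit manipulation); all function names here are hypothetical, and the example only shows what "an imperceptible, machine-readable signal" means in practice.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Toy watermark: hide bits in the least significant bit of each pixel.

    NOT SynthID's approach; purely an illustration of hiding a signal.
    """
    flat = image.flatten().copy()
    # Clear each target pixel's lowest bit, then write one watermark bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the pixel LSBs."""
    return image.flatten()[:n_bits] & 1

# A toy 8x8 grayscale "image" and a 16-bit watermark.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

marked = embed_watermark(img, mark)
recovered = extract_watermark(marked, 16)
# Each pixel changes by at most 1 out of 255, so the image looks unchanged.
```

An LSB mark like this is fragile: recompressing, filtering, or recoloring the image destroys it, which is exactly the weakness of "stamp-like" watermarks that DeepMind says motivates training models to embed a signal that survives such edits.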

Google DeepMind said it designed SynthID so that the watermark does not interfere with image quality yet remains detectable even after modifications such as adding filters or changing colors. The tool uses two deep learning models, one for watermarking and one for identification, that have been trained together on a diverse set of images.

The US government announced this summer its plans to develop watermarking tools in collaboration with Big Tech to mitigate the risk that AI models will be used to spread misinformation.

According to the MIT report, Google DeepMind is the first Big Tech company to publicly launch such a tool.