The watermark, which is invisible to the human eye, must ensure that detection systems can distinguish between AI-generated images and real ones.
When we recently went looking for images for the KIJK Answers question “Have two satellites ever collided?”, we came up empty-handed: no stock photo library had a realistic image of two satellites colliding. So we turned to Microsoft’s Bing image generator, and lo and behold, the image above was born.
Artificial intelligence is getting better and better at generating images that are indistinguishable from the real thing. Google is aware of this and sees the dangers of the trend. The internet giant is therefore working on tools that can recognize whether a photo was made by artificial intelligence. The latest of these, SynthID, contributes to that recognition by giving AI-generated images an invisible watermark.
Image quality
Of course, you could slap a traditional watermark on AI images, but as Google points out, those are very easy to remove. They are also unsightly. With SynthID, the internet giant embeds the watermark directly into the pixels of the image, so that we cannot see it, while Google’s detection system still can.
Not many details about the new tool are known yet. But Demis Hassabis, CEO of Google DeepMind, told the technology site The Verge that the watermark does not affect image quality. It also remains detectable if you crop or shrink the image, for example.
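Google has not said how SynthID actually embeds its watermark. Purely as an intuition for what an invisible, pixel-level watermark means, here is a toy least-significant-bit sketch in Python; the function names and bit pattern are invented for illustration, and unlike what Google claims for SynthID, this naive scheme would not survive cropping or resizing.

```python
# Illustrative sketch only: SynthID's real technique is not public.
# This toy example hides a bit pattern in the least significant bits of
# pixel values, so the change is invisible to the eye but machine-detectable.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical signature bits

def embed(pixels, bits=WATERMARK):
    """Overwrite the least significant bit of each pixel with a watermark bit."""
    return [(p & ~1) | bits[i % len(bits)] for i, p in enumerate(pixels)]

def detect(pixels, bits=WATERMARK):
    """Return the fraction of pixels whose lowest bit matches the watermark."""
    matches = sum((p & 1) == bits[i % len(bits)] for i, p in enumerate(pixels))
    return matches / len(pixels)

image = [120, 87, 200, 45, 33, 180, 90, 14, 250, 66]  # toy grayscale pixels
marked = embed(image)
print(detect(marked))  # 1.0 -> watermark present
print(detect(image))   # ~0.4 -> no reliable watermark signal (chance level)
```

A production-grade watermark would instead spread its signal across the whole image in a way that survives common edits, which is exactly the property Hassabis highlights above.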
Security systems
Hassabis also confirms that the first tests of SynthID are now slowly getting underway. The tool is currently available only to a limited number of Vertex AI customers who use Imagen, Google’s AI image generator. Once Google has proven that the technology works, it is “a matter of scaling it up, sharing it with other parties who want to use the tool, and giving more customers access to it.”
Of course, Google is not the only company working on “containing” artificial intelligence. Meta and OpenAI, among others, recently announced that they are building more security systems into their AI. And for the die-hards among us: a number of tech companies also use C2PA, a standard that labels AI-generated content with cryptographically signed metadata. In short: the industry is hard at work on it.
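To give a feel for what “cryptographically signed metadata” means in practice, here is a deliberately simplified Python sketch. It uses an HMAC with a made-up shared key instead of the certificate-based signatures real C2PA manifests rely on, and the manifest fields are invented for illustration.

```python
# Simplified illustration of the idea behind C2PA-style provenance labels:
# a metadata record describing how an image was made, plus a cryptographic
# signature so tampering can be detected.
import hashlib, hmac, json

SECRET_KEY = b"demo-signing-key"  # placeholder; C2PA uses X.509 certificates

def attach_provenance(image_bytes, generator):
    manifest = {
        "claim_generator": generator,                              # hypothetical field
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(image_bytes, record):
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, record["signature"])
    same_image = record["manifest"]["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return untampered and same_image

image = b"\x89PNG...fake image bytes..."
record = attach_provenance(image, "Example AI Image Generator")
print(verify_provenance(image, record))            # True: label intact
print(verify_provenance(image + b"edit", record))  # False: image was changed
```

The key contrast with SynthID is that this kind of label travels as metadata alongside the image, whereas Google’s watermark is baked into the pixels themselves.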
Image: Microsoft Bing AI Generator