
Google will begin labeling AI-generated images in Search later this year

Google says the company plans to introduce changes to Google Search that will make it clearer which images in the results were generated or edited with AI tools.

Over the next few months, Google will begin labeling AI-generated and AI-edited images in the About This Image window in Search, in Google Lens, and in the Circle to Search feature on Android. Similar disclosures may also come to other Google properties, such as YouTube, in the future. Google says it will announce more later this year.

Crucially, images containing "C2PA metadata" will be flagged as AI-manipulated in search results. C2PA, short for Coalition for Content Provenance and Authenticity, is a group that develops technical standards for tracing the history of an image, including the hardware and software used to capture or create it.

Companies like Google, Amazon, Microsoft, OpenAI, and Adobe support C2PA. However, the coalition's standards have not seen widespread adoption. As The Verge recently reported, C2PA faces numerous challenges around adoption and interoperability, with only a handful of generative AI tools and cameras from Leica and Sony supporting the group's specifications.

Furthermore, C2PA metadata, like any other metadata, can be removed, stripped, or corrupted beyond readability. And images from some of the more popular generative AI tools, such as Flux, which xAI's Grok chatbot uses to generate images, aren't tagged with C2PA metadata at all, in part because their creators haven't agreed to support the standard.
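To see why metadata-based provenance is fragile, consider that re-encoding an image from its raw pixels silently discards whatever was embedded in the file. The sketch below uses the Pillow library and ordinary EXIF data as a stand-in for a C2PA manifest (the filenames and the EXIF tag are illustrative, not part of any C2PA tooling):

```python
from PIL import Image

# Create a small test image carrying EXIF metadata, as a stand-in for
# a photo with an embedded provenance manifest.
original = Image.new("RGB", (4, 4), color="red")
exif = Image.Exif()
exif[0x0131] = "ExampleCamera 1.0"  # 0x0131 is the standard Software tag
original.save("tagged.jpg", exif=exif)

# Copying only the pixel data into a fresh image and re-saving it
# drops every embedded metadata block -- the same weakness applies
# to C2PA manifests stored inside the file.
with Image.open("tagged.jpg") as img:
    stripped = Image.new(img.mode, img.size)
    stripped.putdata(list(img.getdata()))
    stripped.save("stripped.jpg")

with Image.open("stripped.jpg") as img:
    print(dict(img.getexif()))  # empty: the metadata is gone
```

This is why C2PA also explores techniques such as soft bindings and watermarking that survive re-encoding, rather than relying on file metadata alone.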

Still, some measures are better than none as deepfakes continue to spread rapidly. According to one estimate, AI-generated content fraud increased 245% from 2023 to 2024. Deloitte projects that losses caused by deepfakes will grow from $12.3 billion in 2023 to $40 billion by 2027.

Surveys show that the vast majority of people worry about being fooled by a deepfake and about AI's potential to facilitate the spread of propaganda.
