Why watermarks don't work

In case you haven't noticed, the rapid advancement of AI technologies has ushered in a new wave of AI-generated content, ranging from hyper-realistic images to convincing videos and text. This proliferation has opened Pandora's box, however, unleashing a flood of potential misinformation and deception that is testing our ability to distinguish truth from fabrication.

The fear that we will drown in synthetic content is not unfounded. Since 2022, AI users have created a total of more than 15 billion images. To put that gigantic number into perspective, it took humans roughly 150 years before 2022 to produce the same volume of photographs.

The staggering amount of AI-generated content has implications that we are only beginning to discover. Because of the sheer volume of generative AI imagery, historians may have to treat the post-2023 Internet as something entirely different from what came before, much as the atomic bomb disrupted radiocarbon dating. Many Google image searches already return generative AI results, and increasingly we are seeing evidence of war crimes in the Israel-Gaza conflict dismissed as AI-generated when in fact it is not.

Embedding “signatures” into AI content

For the uninitiated, deepfakes are essentially fake content generated using machine learning (ML) algorithms. These algorithms produce realistic footage by mimicking human expressions and voices, and last month's preview of Sora – OpenAI's text-to-video model – only further demonstrated how quickly synthetic reality is becoming difficult to distinguish from physical reality.

Quite rightly, amid growing concerns, tech giants have jumped into the fray, proposing solutions to mark the flood of AI-generated content in a pre-emptive attempt to bring the situation under control.

In early February, Meta announced a new initiative to label images created using its AI tools on platforms like Facebook, Instagram and Threads. The approach combines visible markers, invisible watermarks and detailed metadata to signal an image's artificial origin. Shortly thereafter, Google and OpenAI unveiled similar measures aimed at embedding “signatures” into the content generated by their AI systems.
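To make the metadata part of that approach concrete, here is a minimal sketch of how a provenance record could be attached to an image, using Python and Pillow. It is purely illustrative and not Meta's, Google's or OpenAI's actual implementation; the "provenance" key and its fields are assumptions invented for the example.

```python
# Illustrative only: attach a hypothetical provenance record to a PNG with Pillow.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_image(src_path: str, dst_path: str, generator: str) -> None:
    """Embed a simple provenance record in the PNG's text metadata."""
    record = json.dumps({
        "ai_generated": True,    # hypothetical field, not a real standard
        "generator": generator,  # e.g. the tool or model that produced the image
    })
    metadata = PngInfo()
    metadata.add_text("provenance", record)  # stored as a PNG text chunk
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)


def read_label(path: str) -> dict:
    """Read the provenance record back, returning {} if none is present."""
    with Image.open(path) as img:
        raw = img.text.get("provenance")  # .text exposes the PNG's text chunks
    return json.loads(raw) if raw else {}
```

The obvious limitation is that metadata like this disappears with a screenshot or a simple re-encode, which is presumably why invisible watermarks embedded in the pixels themselves are being pursued alongside it.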

These efforts are supported by the Coalition for Content Provenance and Authenticity (C2PA), a group founded in 2021 by Arm, BBC, Intel, Microsoft, Truepic and Adobe that maintains an open internet protocol for tracing the provenance of digital files and distinguishing real content from manipulated content.
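For readers unfamiliar with how provenance schemes work at a high level, the sketch below shows the general idea of cryptographically signing a file's hash so that later tampering can be detected. It is a simplification under stated assumptions, not the C2PA specification, whose manifests are far richer than a bare signature.

```python
# Conceptual sketch of provenance signing; not the C2PA manifest format.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(data: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the content bytes."""
    return private_key.sign(hashlib.sha256(data).digest())


def verify_content(data: bytes, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    """Return True only if the content still matches what was signed."""
    try:
        public_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False


# Any edit to the bytes, however small, breaks verification.
key = Ed25519PrivateKey.generate()
original = b"pixels of a generated image"
sig = sign_content(original, key)
print(verify_content(original, sig, key.public_key()))         # True
print(verify_content(original + b"!", sig, key.public_key()))  # False
```

The hard part, as the rest of this piece argues, is not the cryptography but the governance around it: who holds the signing keys, and who decides what gets signed in the first place.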

These efforts are an attempt to promote transparency and accountability in content creation, which is of course a positive thing. But even if they are well-intentioned, do we need to walk before we can run? Are they sufficient to genuinely protect against the potential misuse of this evolving technology? Or are they a solution that arrives before its time?

Who gets to decide what's real?

I only ask because a problem arises quite quickly when developing such tools: can detection be universal without empowering those with access to abuse it? If not, how do we prevent misuse of the system itself by those who control it? Once again we are back at the beginning, asking: who gets to decide what's real? That's the elephant in the room, and until that question is answered, I worry that I won't be the only one to notice it.

This year's Edelman Trust Barometer provided important insights into public trust in technology and innovation. The report highlights widespread skepticism about how institutions manage innovation, showing that people worldwide are nearly twice as likely to believe that innovation is managed poorly (39%) rather than managed well (22%), with a significant percentage concerned that the pace of technological change is not beneficial for society as a whole.

The report also highlights prevailing public skepticism about the way corporations, NGOs and governments introduce and regulate new technologies, as well as concerns about the independence of science from political and financial interests.

Technology continues to show that as countermeasures advance, so do the capabilities of the things they are tasked with addressing (and vice versa, ad infinitum). Despite that, we must begin to reverse the general public's lack of trust in innovation if watermarks are to endure.

As we have seen, this is easier said than done. Last month, Google Gemini was heavily criticized after it manipulated image generations using shadow prompts (the technique whereby the AI model silently adapts a user's prompt to reflect a particular bias). Apologies followed, but the damage was done.
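For readers unfamiliar with the term, the toy sketch below illustrates the general mechanism of shadow prompting: a hidden layer rewrites the user's prompt before it ever reaches the model. Both the rewrite rule and the generate_image stand-in are invented for illustration and do not reflect Google's actual pipeline.

```python
# Toy illustration of shadow prompting: the user's prompt is silently rewritten
# before it reaches the model. The rewrite rule is invented for illustration and
# does not reflect any vendor's real system.
def shadow_rewrite(user_prompt: str) -> str:
    hidden_instruction = "Show a diverse range of people."  # never shown to the user
    return f"{user_prompt}. {hidden_instruction}"


def generate_image(user_prompt: str) -> str:
    """Stand-in for a real image-generation API call."""
    effective_prompt = shadow_rewrite(user_prompt)
    return f"[image generated from: {effective_prompt!r}]"


# The user sees only their own prompt; the model sees the rewritten one.
print(generate_image("A portrait of a medieval European king"))
```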

Shouldn't CTOs know what data their models are trained on?

Recently, an interview with Mira Murati, CTO of OpenAI, went viral. In the clip she is asked what data was used to train Sora; Murati answers with “publicly available data and licensed data”. When pressed on exactly what data that means, she admits she's not really sure.

Given the enormous importance of training data quality, one would assume this is the core question a CTO should be able to answer when deciding to invest resources in a video transformer. Her subsequent shutting down of that line of questioning (in an otherwise very friendly interview, I might add) also sets alarm bells ringing. The only two reasonable conclusions from the clip are that she is either a lackluster CTO or a lying one.

Of course there will be many more episodes like this as the technology is adopted en masse, but if we want to reverse the trust deficit we need to ensure some standards are in place. Public education about what these tools are and why they are needed would be a good start. Uniform labeling, with measures that hold individuals and organizations accountable when things go wrong, would be another welcome addition. Furthermore, when something inevitably does go wrong, there needs to be an open discussion about why it happened. Transparency throughout these processes is crucial.

Without such measures, I fear that watermarking will be little more than a band-aid, failing to address the underlying problems of misinformation and eroding trust in synthetic content. Rather than functioning as a robust tool for verifying authenticity, it could become a merely symbolic gesture, most likely to be circumvented by those intent on deception or simply ignored by those who assume it already has been.

As we will see (and in some places are already seeing), deepfake election interference is likely to be the defining generative AI story of the year. With more than half of the world's population heading to the polls and public trust in institutions still at an all-time low, this is the problem we need to solve before we can expect something like content watermarking to swim rather than sink.
