Concern about generative artificial intelligence technologies seems to be growing almost as fast as the technologies themselves are proliferating. These concerns are driven by worry about the possible spread of disinformation at unprecedented scale, as well as fears of job losses, loss of control over creative works and, more futuristically, an AI so powerful that it causes the extinction of the human species.
The concerns have led to calls for regulating AI technologies. Some governments, such as the European Union, have responded to their citizens' desire for regulation, while others, such as the United Kingdom and India, have taken a more laissez-faire approach.
In the United States, on October 30, 2023, the White House issued an executive order entitled “Safe, Secure, and Trustworthy Artificial Intelligence.” It sets out guidelines to reduce both immediate and long-term risks from AI technologies. For example, AI providers are asked to share safety test results with the federal government, and Congress is asked to enact consumer privacy legislation as AI technologies vacuum up ever more data.
Given the push to regulate AI, it is important to consider which regulatory approaches are feasible. There are two aspects to this question: what is technologically feasible today, and what is economically feasible. It is also important to consider both the training data that goes into an AI model and the model's output.
1. Respect copyright
One approach to regulating AI is to limit training data to material in the public domain and copyrighted material that the AI company has received permission to use. An AI company can decide exactly which data samples it uses for training and can restrict itself to permitted material, so this approach is technologically feasible.
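Because a provider controls its own data pipeline, enforcing such a limit amounts to a filtering step before training. Here is a minimal Python sketch of that idea; the "license" field name and the label values are illustrative assumptions, not any real dataset's schema or any company's actual pipeline.

```python
# Hypothetical curation step: keep only training samples whose declared
# license permits use. Field names and labels are assumptions.
ALLOWED_LICENSES = {"public-domain", "cc0", "licensed-with-permission"}

def filter_training_data(samples: list[dict]) -> list[dict]:
    """Keep only samples whose license metadata permits training use."""
    return [s for s in samples if s.get("license") in ALLOWED_LICENSES]

corpus = [
    {"text": "An 1890s novel, long out of copyright.", "license": "public-domain"},
    {"text": "A scraped news article.", "license": "unknown"},
]

print(filter_training_data(corpus))  # only the public-domain sample remains
```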
It is only partly economically feasible, however. The quality of AI-generated content depends on the volume and richness of the training data, so it is economically advantageous for an AI provider not to be limited to content it has permission to use. Nevertheless, some generative AI companies today advertise, as a selling point, that they use only content they have permission to use. Adobe, with its Firefly image generator, is one example.
2. Associate the output with a training data creator
Another possible way to regulate generative AI is to attribute the output of an AI technology to a specific creator – artist, singer, writer and so on – or group of creators so that they can be compensated. However, due to the complexity of the algorithms involved, it is impossible to say which input samples an output is based on. Even if it were possible, it would be impossible to determine the extent to which each input sample contributed to the result.
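A toy example makes the difficulty concrete. Even in the simplest learned model – ordinary least-squares regression, vastly simpler than any generative AI – the fitted parameters are a single blend of every training sample, leaving no per-sample record to attribute an output to. This is an illustrative sketch only, not a claim about any particular AI system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # 1,000 training samples, 8 features
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1000)

# Fitting collapses all 1,000 samples into just 8 learned numbers at once.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Every prediction depends on that blend, and hence on all samples jointly;
# nothing in the model records which sample "caused" a given output.
prediction = X[0] @ w
print(prediction)
```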
Attribution is an important issue because it is likely to determine whether creators – or the licensees of their works – embrace or oppose AI technology. The 148-day Hollywood screenwriters' strike, and the concessions the writers secured to protect against AI, illustrate the stakes.
In my view, this kind of regulation, which targets the output end of AI, is technologically infeasible.
3. Distinguish human-generated content from AI-generated content
An immediate concern with AI technologies is that they enable automatically generated disinformation campaigns. This has already happened to varying degrees – for instance, in disinformation campaigns during the Ukraine-Russia war. It is a critical concern for democracy, which depends on a public informed by reliable news sources.
There is a great deal of startup activity aimed at developing technology that can distinguish AI-generated content from human-generated content, but so far this technology lags behind generative AI itself. The current approach focuses on identifying the telltale patterns of generative AI, which is by definition something of a losing battle.
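To illustrate why pattern matching struggles, here is a bare-bones sketch of one common detection idea: flag text that a language model finds unusually predictable. The model choice and threshold below are assumptions for illustration; real detectors are far more elaborate, and generative models are trained precisely to produce natural-looking text, which is what makes this a losing battle.

```python
# Sketch of perplexity-based AI-text detection. GPT-2 and the cutoff
# value are illustrative assumptions, not a production detector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average next-token surprise of `text` under the language model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 20.0  # assumed cutoff; choosing it well is exactly the hard part

def looks_ai_generated(text: str) -> bool:
    # Very predictable text scores low perplexity and gets flagged.
    return perplexity(text) < THRESHOLD
```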
This approach to regulating AI, which also targets the output side, is not currently technologically feasible, though rapid progress on this front is likely.
4. Attribute the output to an AI company
It is feasible to attribute AI-generated content as coming from a specific AI provider's technology. This can be achieved through the well-understood and mature technology of cryptographic signatures: AI providers could cryptographically sign all output from their systems, and anyone could verify those signatures.
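As a sketch of the mechanics, the Ed25519 signature scheme from Python's widely used `cryptography` library is enough to sign and verify a piece of output. How verifiers would obtain a provider's public key (for example, via published certificates) is omitted here.

```python
# Minimal sketch: a provider signs its output; anyone verifies it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The AI provider holds the private key and signs every output it emits.
provider_key = Ed25519PrivateKey.generate()
public_key = provider_key.public_key()  # published for verifiers

output = "Text produced by the provider's model.".encode()
signature = provider_key.sign(output)

# Anyone with the public key can check that the content is unmodified
# and really came from this provider.
try:
    public_key.verify(signature, output)
    print("Signature valid: output attributed to this provider.")
except InvalidSignature:
    print("Signature invalid: tampered or not this provider's output.")
```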
This technology is already embedded in basic computing infrastructure – for example, a web browser verifies a cryptographic signature when it connects to a website – so AI companies could adopt it easily. A separate question is whether it is desirable to rely on AI-generated content from only a handful of large, established providers whose signatures can be verified.
This form of regulation, which targets the output side of AI tools, is therefore both technologically and economically feasible.
It will be important for policymakers to understand the potential costs and benefits of each type of regulation. But first they need to know which ones are technologically and economically feasible.