
The United Nations has a plan to manage artificial intelligence – but has it taken the industry’s hype to heart?

The United Nations Secretary-General’s Advisory Body on Artificial Intelligence (AI) has released its final report on governing AI for humanity.

The report presents a plan to address AI-related risks without compromising the potential of the technology. It also includes a call for all governments and stakeholders to work together to govern AI in order to promote the development and protection of all human rights.

On the surface, this report appears to be a positive step forward for AI, encouraging development while mitigating potential harms.

However, the report's finer details raise quite a few concerns.

Reminiscent of the IPCC

The UN Advisory Body on AI met for the first time on October 26, 2023. Its aim is to develop recommendations for the international governance of AI. This approach is said to be necessary to ensure that the benefits of AI – such as opening up new areas of scientific research – are shared evenly, while the risks of the technology – such as mass surveillance and the spread of misinformation – are mitigated.

The Advisory Body consists of 39 members from different regions and professions. These include industry representatives from Microsoft, Mozilla, Sony, Collinear AI and OpenAI.

The advisory body is reminiscent of the United Nations Intergovernmental Panel on Climate Change (IPCC), whose aim is to make important contributions to international negotiations on climate change.

The inclusion of prominent industry representatives on the AI Advisory Body is a departure from the IPCC. This can have benefits, such as a deeper understanding of AI technologies. But it can also have drawbacks, such as viewpoints biased in favor of commercial interests.

The recent publication of the final report, Governing AI for Humanity, provides important insights into what we can expect from this body.

What does the report say?

The final report on governing AI for humanity follows an interim report published in December 2023. It contains seven recommendations to address gaps in current AI governance arrangements.

These include establishing an independent international scientific body on AI, setting up an AI standards exchange and creating a global AI data framework. The report also ends with a call for all governments and relevant stakeholders to jointly govern AI.

What is worrying about the report are the unbalanced and at times contradictory claims it contains.

For example, the report rightly calls for governance measures to address the impact of AI on concentrated power and wealth, as well as its geopolitical and geoeconomic consequences.

However, the report also claims that:

Currently, no one understands the inner workings of AI well enough to fully control its outputs or predict its evolution.

This claim is factually incorrect in several ways. It is true that there are some “black box” systems – those where the inputs are known but the computational process used to generate the outputs is not. But AI systems in general are well understood at a technical level.

AI covers a broad spectrum of capabilities, ranging from generative AI systems such as ChatGPT to deep learning systems such as facial recognition. The assumption that all of these systems embody the same level of impenetrable complexity is wrong.

The inclusion of this claim calls into question the benefit of having industry representatives on the Advisory Body, since they are supposed to bring a deeper understanding of AI technologies.

The other problem this claim raises is the idea that AI evolves on its own. What is interesting about the rise of AI in recent years is the accompanying narratives that falsely portray AI as a system that acts of its own accord.

This misrepresentation shifts perceived liability and responsibility away from those who design and develop these systems, and gives the industry a creative scapegoat.

Despite the subtle undertone of powerlessness toward AI technologies and the unbalanced claims made throughout, the report does in some ways contribute to a positive discourse.

A small step forward

Overall, the report and its call to action represent a positive step forward by emphasizing that AI can be managed and controlled, despite conflicting claims throughout the report suggesting otherwise.

The inclusion of the term “hallucinations” is a striking example of these contradictions.

Sam Altman popularized the concept of AI hallucination.
Markus Schreiber/AP

The term itself was popularized by OpenAI CEO Sam Altman, who used it to reframe nonsensical outputs as part of the “magic” of AI. “Hallucination” is not a technically recognized term – it is a piece of creative marketing. Pushing for AI control while endorsing a term that suggests the technology cannot be controlled is not constructive.

What the report lacks is a consistent perception and understanding of AI.

There is also a lack of application specificity – a common limitation of many AI initiatives. A global approach to AI governance will only work if it is able to capture the nuances of application and domain specificity.

The report is a step in the right direction, but it still needs to be refined and expanded to ensure that it encourages development while curbing the many harms of AI.
