Tech companies worldwide are committing to new voluntary rules


Leading AI companies have agreed to a series of new voluntary safety commitments announced by the British and South Korean governments ahead of a two-day AI summit in Seoul.

The commitments involve 16 technology companies, including Amazon, Google, Meta, Microsoft, OpenAI, xAI, and Zhipu AI.

Under the commitments, companies agree not to "develop or deploy a model at all" if severe risks cannot be managed.

Companies have also agreed to disclose how they will measure and mitigate the risks associated with their AI models.

The new commitments come after prominent AI researchers, including Yoshua Bengio, Geoffrey Hinton, Andrew Yao, and Yuval Noah Harari, published a paper in Science on managing extreme AI risks amid rapid progress.

The paper made several recommendations to guide the new safety framework:

  • Oversight and honesty: Developing methods to ensure that AI systems are transparent and produce reliable results.
  • Robustness: Ensuring that AI systems behave predictably in new situations.
  • Interpretability and transparency: Understanding AI decision-making processes.
  • Inclusive AI development: Mitigating biases and integrating diverse values.
  • Evaluation for dangerous capabilities: Developing rigorous methods to assess AI capabilities and predict risks before deployment.
  • AI alignment evaluation: Ensuring that AI systems pursue their intended goals and not harmful ones.
  • Risk assessments: Comprehensively assessing the societal risks associated with AI deployment.
  • Resilience: Creating defenses against AI-enabled threats such as cyberattacks and social manipulation.

Anna Makanju, Vice President of Global Affairs at OpenAI, commented on the new recommendations: "The field of AI safety is rapidly evolving, and we are particularly pleased to support the commitments' focus on refining approaches alongside the science. We remain committed to working with other research labs, companies, and governments to ensure that AI is safe and benefits all of humanity."

Michael Sellitto, Head of Global Affairs at Anthropic, similarly commented: "The frontier AI safety commitments underscore the importance of safe and responsible development of frontier models. As a safety-focused company, we make it a priority to implement strict policies, conduct extensive red teaming, and work with external experts to ensure our models are secure. These commitments are an important step forward in promoting responsible AI development and deployment."

Another voluntary framework

This mirrors the "voluntary commitments" made at the White House in July last year by Amazon, Anthropic, Google, Inflection AI, Meta, Microsoft, and OpenAI to promote the safe and transparent development of AI technology.

The new rules state that the 16 companies will "ensure public transparency" about their safety measures, except where doing so would increase risks or reveal sensitive commercial information disproportionate to the societal benefit.

British Prime Minister Rishi Sunak said: "It's a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety."

It is a world first in the sense that companies from outside North America, such as Zhipu.ai, have signed on.

However, voluntary AI safety commitments have been in vogue for some time. Agreeing to them poses little risk for AI companies, since they cannot be enforced, which also means they may prove toothless when the going gets tough.

Dan Hendrycks, safety adviser at Elon Musk's startup xAI, noted that the commitments would help "lay the foundation for concrete domestic regulation."

A fair comment, but by their own admission we have yet to "lay the foundation" at a time when, according to some leading researchers, extreme risks are looming.

Not everyone agrees on how dangerous AI really is, but the point remains that the sentiment behind these frameworks does not yet match the actions.

Nations establish AI safety network

As this smaller AI safety summit gets underway in Seoul, South Korea, ten nations and the European Union (EU) have agreed to establish an international network of publicly funded "AI safety institutes."

The "Seoul Memorandum of Understanding on International Cooperation in AI Safety Science" involves the United Kingdom, the United States, Australia, Canada, France, Germany, Italy, Japan, South Korea, Singapore, and the EU.

Notably, China was absent from the agreement. However, the Chinese government participated in the summit, and a Chinese company, Zhipu.ai, signed the framework described above.

China has previously expressed its willingness to cooperate on AI safety and has held "secret" talks with the US.

This smaller interim summit arrived with less fanfare than the first event, held at Britain's Bletchley Park last November.

However, several well-known tech figures attended, including Elon Musk, former Google CEO Eric Schmidt, and DeepMind founder Sir Demis Hassabis.

Further commitments and discussions will be announced in the coming days.
