
Major technology companies join the Coalition for Secure AI (CoSAI)

Some of the biggest names in Big Tech have come together to form the Coalition for Secure AI (CoSAI).

There is no global standard for secure AI development practices yet. Current AI security measures are fragmented and often managed internally by the companies building AI models.

CoSAI is an open-source initiative of the global standards body OASIS, with the goal of standardizing and sharing best practices for the secure development and deployment of AI.

Big Tech companies supporting the initiative include Google, IBM, Intel, Microsoft, NVIDIA, and PayPal. Other founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI, and Wiz.

Apple and Meta are notably absent.

CoSAI aims to develop and disseminate comprehensive security measures to address the following risks:

  • Stealing the model
  • Poisoning of training data
  • Injecting malicious inputs through prompt injection
  • Scaled abuse prevention
  • Membership inference attacks
  • Model inversion attacks or gradient inversion attacks to derive private information
  • Extracting confidential information from the training data

CoSAI’s charter states that “the following topics are not within the scope of the project: misinformation, hallucinations, hateful or offensive content, bias, malware generation, phishing content generation, or other content security issues.”

Google already has its Secure AI Framework (SAIF), and OpenAI has its competing Alignment project, but until the creation of CoSAI there was no forum for bringing together the AI security best practices independently developed by industry players.

We have seen smaller startups like Mistral enjoy a meteoric rise with the AI models they develop, but many of these smaller companies don’t have the resources to fund AI security teams.

CoSAI could be a valuable free source of AI security best practices for industry players large and small.

Heather Adkins, Vice President and Cybersecurity Resilience Officer at Google, said: “We have been using AI for many years and see the ongoing potential for defenders, but also recognize the opportunities for adversaries.

“CoSAI will help organizations large and small integrate AI safely and responsibly – helping them realize its benefits while minimizing risks.”

Nick Hamilton, Head of Governance, Risk and Compliance at OpenAI, said: “Developing and delivering secure and trusted AI technologies is central to OpenAI’s mission.

“We believe that developing robust standards and practices is crucial to ensuring the secure and responsible use of AI, and we’re committed to working together across the industry to that end.

“By participating in CoSAI, we aim to contribute our expertise and resources to help create a secure AI ecosystem that benefits everyone.”

Let’s hope that people like Ilya Sutskever and others who have left OpenAI over safety concerns voluntarily pass on their contributions to CoSAI.
