Late Friday afternoon, a window of time companies typically reserve for unflattering disclosures, AI startup Hugging Face said its security team had detected "unauthorized access" earlier this week to Spaces, Hugging Face's platform for creating, sharing and hosting AI models and resources.
In a blog post, Hugging Face said the breach related to Spaces secrets — the private information that acts as keys to unlock protected resources such as accounts, tools and development environments — and that it "suspects" some of those secrets may have been accessed by third parties without authorization.
As a precaution, Hugging Face has revoked a number of tokens contained in those secrets. (Tokens are used to verify identity.) Hugging Face says users whose tokens were revoked have already received an email notice, and it recommends that all users "update any keys or tokens" and consider switching to fine-grained access tokens, which Hugging Face says are more secure.
It was not immediately clear how many users or apps were affected by the potential breach.
"We are working with third-party cybersecurity forensics experts to investigate the issue and review our security policies and procedures. We have also reported this incident to law enforcement and data protection authorities," Hugging Face wrote in the post. "We deeply regret the disruption this incident may have caused and understand any inconvenience it may have put you through. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure."
In an emailed statement, a Hugging Face spokesperson told TechCrunch:
"We have seen a significant increase in the number of cyberattacks in recent months, likely because our usage has grown significantly and AI is becoming more mainstream. It is technically difficult to know how many Spaces secrets have been compromised."
The possible hacking of Spaces comes at a time when Hugging Face — one of the largest platforms for collaborative AI and data science projects, with over one million models, datasets and AI-powered apps — is facing increasing scrutiny over its security practices.
In April, researchers at cloud security firm Wiz found a vulnerability — since fixed — that allowed attackers to execute arbitrary code during the build of an app hosted on Hugging Face, letting them examine the network connections of their machines. Earlier this year, security firm JFrog uncovered evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines. And security startup HiddenLayer identified ways in which Hugging Face's supposedly safer Safetensors serialization format could be abused to create sabotaged AI models.
Hugging Face recently said it will partner with Wiz to leverage the company's vulnerability scanning and cloud environment configuration tools "with the goal of improving security across our platform and the entire AI/ML ecosystem."