Generative AI makes things up. It can be biased. Sometimes it spits out toxic text. So can it be "safe"?
Rick Caccia, CEO of WitnessAI, believes it can.
"Securing AI models is a real problem, and it's one that's especially important to AI researchers, but it's different from securing use," said Caccia, formerly SVP of marketing at Palo Alto Networks, in an interview with TechCrunch. "I think of it like a sports car: a more powerful engine, that is, a more powerful model, does you no good unless you also have good brakes and good steering. The controls are just as important for driving fast as the engine."
There is certainly demand for such controls in the enterprise. While companies are cautiously optimistic about generative AI's potential to boost productivity, they have concerns about the technology's limitations.
According to an IBM survey, 51% of CEOs are hiring for generative AI roles that didn't exist until this year. Yet only 9% of companies say they're prepared to manage the threats, including privacy and intellectual property threats, arising from their use of generative AI, per a Riskonnect survey.
WitnessAI's platform intercepts activity between employees and the custom generative AI models their employer uses (not models gated behind an API, like OpenAI's GPT-4, but more along the lines of Meta's Llama 3) and applies risk-mitigating policies and safeguards.
"One of the promises of enterprise AI is that it unlocks corporate data and makes it accessible to employees so they can do their jobs better," Caccia said. "But unlocking all that sensitive data, or having it leak or get stolen, is a problem."
WitnessAI sells access to several modules, each focused on combating a different type of generative AI risk. One module lets organizations implement rules that prevent employees on certain teams from using generative-AI-powered tools in ways they're not permitted to (e.g., asking about unreleased earnings reports or pasting in internal codebases). Another module removes proprietary and confidential information from the prompts sent to models and applies techniques to shield models from attacks that might force them to go off script.
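To make the idea concrete, here is a minimal sketch in Python of how such a gateway might enforce an identity-based policy and redact sensitive strings from a prompt before it reaches a model. All names, rules, and patterns here are hypothetical; WitnessAI has not published its API or schema.

```python
import re

# Hypothetical policy table mapping teams to the AI tools they may use.
# Illustrative only; not WitnessAI's actual data model.
POLICIES = {
    "finance": {"allowed_tools": {"summarizer"}},
    "engineering": {"allowed_tools": {"summarizer", "code-assistant"}},
}

# Simple redaction patterns for obviously sensitive content.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED-KEY]"),
]

def screen_prompt(team: str, tool: str, prompt: str) -> str:
    """Block disallowed tool use, then scrub sensitive data from the prompt."""
    policy = POLICIES.get(team)
    if policy is None or tool not in policy["allowed_tools"]:
        raise PermissionError(f"Team {team!r} may not use tool {tool!r}")
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt  # Now safe to forward to the model.

# Example: the API key is scrubbed before the prompt leaves the gateway.
print(screen_prompt("engineering", "code-assistant",
                    "Debug this: api_key = sk-12345"))
```

A real product would sit inline as a proxy and handle far subtler leaks (named entities, code fingerprints), but the allow-then-redact shape of the pipeline is the same.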
"We believe the best way to help organizations is to define the problem in a way that makes sense, for example the safe adoption of AI, and then sell a solution that addresses the problem," said Caccia. "The CISO wants to protect the organization, and WitnessAI helps them do that by ensuring data privacy, preventing prompt injection, and enforcing identity-based policies. The chief privacy officer wants to ensure compliance with existing and future regulations, and we give them visibility and a way to report on activity and risk."
From a privacy standpoint, though, there's one tricky thing about WitnessAI: all data passes through its platform before reaching a model. The company is transparent about this, and it even offers tools to monitor which models employees access, the questions they ask those models, and the answers they receive. But this could create its own privacy risks.
Responding to questions about WitnessAI's privacy policy, Caccia said the platform is "isolated" and encrypted to prevent customer secrets from spilling into the open.
"We've built a millisecond-latency platform with regulatory separation built right in, a unique, isolated design to protect enterprise AI activity in a way that's fundamentally different from the usual multi-tenant software-as-a-service offerings," he said. "We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them; we can't see it."
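As a rough illustration of the per-customer encryption Caccia describes, the sketch below assumes envelope-style encryption with a key the customer holds. This uses Python's `cryptography` library for brevity; the key handling and record format are assumptions, not WitnessAI's implementation.

```python
from cryptography.fernet import Fernet

# Hypothetical sketch: each customer's activity log is encrypted with a key
# only that customer holds, so the platform operator cannot read it.
customer_key = Fernet.generate_key()   # held by the customer, not the vendor
cipher = Fernet(customer_key)

record = b'{"user": "alice", "model": "llama-3", "prompt_hash": "..."}'
stored = cipher.encrypt(record)        # what the platform actually persists

# Without customer_key, `stored` is opaque ciphertext; with it, the
# customer can decrypt and audit their own activity.
assert cipher.decrypt(stored) == record
```

The design choice this models is that visibility tooling operates only inside each customer's isolated instance, which is how a vendor can offer monitoring features while claiming it cannot see the underlying data.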
Perhaps that will allay customers' fears. As for workers worried about the surveillance potential of WitnessAI's platform, it's a tougher call.
Surveys show that people generally don't appreciate having their workplace activity monitored, regardless of the reason, and believe it negatively affects company morale. Nearly a third of respondents to a Forbes survey said they might consider quitting their jobs if their employer monitored their online activity and communications.
Caccia asserts, however, that interest in WitnessAI's platform has been and remains strong, with a pipeline of 25 early enterprise users in a proof-of-concept phase. (The platform won't be generally available until Q3.) And in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google's corporate venture arm.
The plan is to put the funding tranche toward growing WitnessAI's 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI's rivals in the nascent space for model compliance and governance solutions, not only tech giants like AWS, Google, and Salesforce but also startups such as CalypsoAI.
"We've built our plan to get well into 2026 even if we had no sales at all, but we already have almost 20 times the pipeline needed to hit our sales targets this year," Caccia said. "This is our initial funding round and public launch, but securely enabling and using AI is a new area, and all of our capabilities are developing with this new market."