Distributional, an AI testing platform founded by Scott Clark, the former GM of AI software at Intel, has closed a $19 million Series A funding round led by Two Sigma Ventures.
Clark says Distributional was inspired by the AI testing problems he encountered while applying AI at Intel and, before that, during his work at Yelp as a software lead on the company's ad targeting team.
“As the value of AI applications continues to grow, so do the operational risks,” he told TechCrunch. “AI product teams use our platform to proactively and continuously identify, understand and address AI risks before they become issues in production.”
Clark came to Intel through an acquisition.
In 2020, Intel acquired SigOpt, a model experimentation and management platform that Clark co-founded. Clark stayed on after the acquisition and was named VP and GM of Intel's AI and supercomputing software group in 2022.
At Intel, Clark says he and his team were often hamstrung by AI monitoring and observability issues.
AI is not deterministic, Clark emphasized, meaning it can generate different results given the same data. On top of that, AI models have many dependencies (e.g., software infrastructure and training data), so finding bugs in an AI system can feel like searching for a needle in a haystack.
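To make the non-determinism point concrete, here is a toy Python sketch (purely illustrative, not Distributional's code): with sampling-based generation, the norm for generative models, the same input can produce a different output on every call.

```python
import random

# Toy stand-in for a generative model: sampling over candidate outputs
# means the same prompt can yield a different answer on each call.
def toy_generate(prompt: str, temperature: float = 1.0) -> str:
    candidates = ["approve", "deny", "escalate"]
    base_probs = [0.6, 0.3, 0.1]
    # Higher temperature flattens the distribution, increasing variability.
    weights = [p ** (1.0 / temperature) for p in base_probs]
    return random.choices(candidates, weights=weights, k=1)[0]

print([toy_generate("loan application #123") for _ in range(5)])
# A typical run: ['approve', 'approve', 'deny', 'approve', 'escalate']
```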
According to a 2024 RAND Corporation survey, over 80% of AI projects fail. Generative AI poses a particular challenge for companies: a Gartner study predicts that a third of generative AI projects will be abandoned by 2026.
“It requires writing statistical tests on distributions of many data properties,” Clark said. “AI must be tested continuously and adaptively throughout its lifecycle to detect changes in behavior.”
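As an illustration of the general technique Clark is describing (not Distributional's product), a statistical test over a distribution of one data property, here response length, might look like this minimal Python sketch:

```python
import numpy as np
from scipy import stats

# Illustrative only: compare the distribution of one output property
# (response length) between a baseline run and a current run.
rng = np.random.default_rng(0)
baseline_lengths = rng.normal(120, 15, size=500)  # e.g., lengths from last week
current_lengths = rng.normal(135, 15, size=500)   # e.g., lengths from today

# A two-sample Kolmogorov-Smirnov test asks whether the two samples
# plausibly come from the same distribution; a small p-value flags a shift.
statistic, p_value = stats.ks_2samp(baseline_lengths, current_lengths)
if p_value < 0.01:
    print(f"Behavior shift detected (KS={statistic:.3f}, p={p_value:.2e})")
```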
Clark launched Distributional to abstract away some of this AI auditing work, drawing on techniques he and the SigOpt team developed while working with enterprise customers. Distributional can automatically create statistical tests for AI models and apps according to a developer's specifications and organize the results of those tests in a dashboard.
From this dashboard, Distributional users can collaborate on test “repositories,” triage failed tests and recalibrate tests if necessary. The entire environment can be deployed on-premises (though Distributional also offers a managed plan) and integrated with popular alerting and database tools.
“We provide transparency across the organization about what was tested, when and how, and how that has changed over time,” said Clark, “and we offer a repeatable AI testing process for similar applications through shareable templates, configurations, filters and tags.”
AI is indeed an unwieldy beast. Even the best AI labs have weak risk management. A platform like Distributional's could reduce testing effort and perhaps even help companies achieve ROI.
At least that's Clark's opinion.
“Whether it's instability, inaccuracy or dozens of other potential challenges, it can be difficult to identify AI risks,” he said. “If teams don't do AI testing properly, they risk AI applications never making it to production. Or, once in production, they risk those applications behaving in unexpected and potentially harmful ways without those problems being visible.”
Distributional isn't the first to bring technology to market for probing and analyzing the reliability of AI. Kolena, Prolific, Giskard and Patronus are among the many AI testing solutions on the market. Tech giants like Google Cloud, AWS and Azure also offer model evaluation tools.
Why should a customer choose Distributional?
Well, Clark claims that Distributional, which is close to commercializing its product line, offers a more “white-glove” experience than many others. Distributional handles installation, implementation and integration for customers, and provides AI testing troubleshooting (for a fee).
“Monitoring tools often focus on high-level metrics and specific outliers, providing a limited sense of consistency but no insight into broader application behavior,” Clark said. “The goal of Distributional's testing is to enable teams to define the desired behavior for each AI application, confirm that it still behaves as expected in development and in production, detect when that behavior changes, and determine what needs to be further developed or fixed to get back to a steady state.”
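A hypothetical sketch of the testing pattern Clark describes (the BehaviorTest class and all names here are assumptions, not Distributional's API): expected behavior is encoded as a distribution over a property of the application's responses, and a statistical comparison against a baseline detects when production drifts from it.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np
from scipy import stats

@dataclass
class BehaviorTest:
    name: str
    extract: Callable[[List[str]], np.ndarray]  # responses -> property values
    alpha: float = 0.01                         # significance threshold

    def passes(self, baseline: List[str], production: List[str]) -> bool:
        # Small p-value => the property's distribution has likely shifted.
        _, p_value = stats.ks_2samp(self.extract(baseline),
                                    self.extract(production))
        return p_value >= self.alpha

# Stand-in data: baseline responses vs. noticeably longer production responses.
baseline = ["short answer", "a somewhat longer answer here"] * 50
production = ["a much, much longer answer than the application used to give"] * 100

length_test = BehaviorTest("response_length",
                           lambda rs: np.array([len(r) for r in rs]))
status = ("behaves as expected" if length_test.passes(baseline, production)
          else "behavior change detected")
print(f"{length_test.name}: {status}")
```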
With its new Series A funding, Distributional plans to expand its technical team with a focus on UI and AI research engineering. Clark said he expects the company's workforce to grow to 35 people by the end of the year as Distributional begins its first wave of enterprise deployments.
“We have secured significant funding in just one year since our founding and, with our growing team, are positioned to capitalize on this tremendous opportunity over the next few years,” Clark added.
Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC and Alumni Ventures also participated in Distributional's Series A. To date, the San Francisco-based startup has raised $30 million.