Distributional raises $19M to automate AI model and app testing

Distributional, an AI testing platform founded by Scott Clark, formerly Intel’s general manager of AI software, has raised an additional $19 million in Series A funding led by Two Sigma Ventures. Clark was inspired to build Distributional after experiencing AI testing pains firsthand during his stints at Intel and Yelp.

“As the value of AI applications continues to grow, so does the associated operational risk,” Clark said. “Our platform empowers AI product teams to proactively identify, understand, and mitigate risks related to AI deployment before they escalate in production environments.”

Clark’s involvement with Distributional follows Intel’s acquisition of SigOpt, a model experimentation and management platform he co-founded. After the acquisition, he held a series of roles at Intel, most recently VP and GM of its AI and supercomputing software group. Throughout, he was repeatedly frustrated by problems in AI monitoring and observability.

Because AI is inherently nondeterministic, meaning it can produce different results from the same input data, bugs in an AI system are much harder to detect than in traditional software. More dismal still, a RAND Corporation survey last year found that over 80% of AI projects eventually fail, and generative AI has proved particularly troublesome: Gartner predicts that one-third of generative AI deployments will be abandoned by 2026.
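To illustrate why nondeterminism makes bugs slippery, here is a minimal toy sketch (a hypothetical stand-in, not Distributional’s product or any real model API) in which the same prompt can yield different answers, so a rare failure mode hides from one-off checks and only surfaces when you sample many times:

```python
import random

def sample_reply(prompt, rng=random):
    """Toy stand-in for an LLM call: picks a canned reply at random.
    Real models are nondeterministic for analogous reasons, e.g.
    temperature-based token sampling."""
    replies = ["Paris", "Paris, France", "The capital is Paris", "Lyon"]
    weights = [0.50, 0.30, 0.19, 0.01]  # "Lyon" is a rare wrong answer
    return rng.choices(replies, weights=weights, k=1)[0]

# A single spot-check ("does it say Paris?") will usually pass, while the
# 1%-probability failure mode goes unnoticed. Sampling many times and
# measuring the failure rate is what catches it.
rng = random.Random(42)
outputs = [sample_reply("What is the capital of France?", rng=rng)
           for _ in range(1000)]
failure_rate = outputs.count("Lyon") / len(outputs)
```

The point of the sketch is that correctness of a nondeterministic system is a property of a distribution of outputs, not of any single output.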

Clark said one way to maintain a sound methodology is to back it with statistical tests that check various properties of the data. “AI needs to be tested continuously and adaptively across its life cycle to catch any shift in behavior,” he said. Distributional aims to make AI auditing much easier by applying methods developed while working with enterprise customers at SigOpt. The service can automatically generate any number of statistical tests tailored to developers’ needs and surface the results on an easy-to-read dashboard.
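As a rough illustration of the kind of statistical test described above (a generic two-sample Kolmogorov–Smirnov check, not Distributional’s actual method), one can compare a baseline distribution of per-request quality scores against production scores and flag a behavioral shift when the gap between their empirical CDFs exceeds a threshold:

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    def ecdf(sample, x):
        # fraction of values in sample that are <= x
        return sum(v <= x for v in sample) / len(sample)
    pooled = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in pooled)

rng = random.Random(0)
# Hypothetical per-request quality scores for an AI application.
baseline = [rng.gauss(0.70, 0.05) for _ in range(500)]  # dev-time behavior
stable   = [rng.gauss(0.70, 0.05) for _ in range(500)]  # production, unchanged
drifted  = [rng.gauss(0.55, 0.08) for _ in range(500)]  # production after a shift

THRESHOLD = 0.15  # tunable: how large a CDF gap counts as a behavioral change
drift_flagged  = ks_statistic(baseline, drifted) > THRESHOLD  # should fire
stable_flagged = ks_statistic(baseline, stable) > THRESHOLD   # should not
```

In practice, a platform like Distributional presumably runs many such tests over many properties of inputs and outputs; the threshold (or a p-value) controls how sensitive the alerting is.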

From there, teams can collaborate on test “repositories,” prioritize failed tests, and tune them accordingly. Distributional can be installed on-premises or used as a managed service, and it integrates with common alerting and database tools.

“We give organisations deep visibility into what, when, and how AI applications were tested and how these practices have changed over time,” Clark added. “Our platform delivers a repeatable testing process for similar applications with shareable templates, configurations, filters, and tags.”

Even leading AI labs acknowledge that the testing challenges surrounding AI are legion, and many organizations have no adequate system in place to manage the risks. Distributional’s platform could therefore ease the burden of testing and help businesses see a return on their AI investments.

“Instability, noise, and many other problems can obscure AI risks,” Clark warned. “If teams don’t test their AI correctly, they risk that their applications never make it to production or, if they do, behave unpredictably and potentially cause harm, all while the team lacks visibility into those problems.”

Although Distributional is not the first company to offer AI reliability analysis, it differs from others in providing a more personalized “white glove” service. The company handles installation, implementation, and integration for its clients, and also provides troubleshooting support for their AI testing.

“Most monitoring tools focus on high-level metrics and specific outlier instances, giving little insight into what broader application behavior looks like,” Clark said. “The goal of our testing is to help teams determine the desired behavior of any AI application, confirm that it’s behaving as intended both in production and development, detect any behavioral changes, and identify the adjustments needed to recover stability.”

With the funds from this Series A round, Distributional intends to strengthen its technical team, specifically in user interface and AI research engineering. Clark expects the workforce to reach 35 employees by the end of the year, when the company begins its first phase of enterprise deployments. The company has raised $30 million to date from investors including Andreessen Horowitz, Operator Collective, Oregon Venture Fund, Essence VC, and Alumni Ventures.