WitnessAI is building guardrails for generative AI models


Generative AI makes stuff up. It can be biased. Sometimes, it spits out toxic text. So can it be “safe”?

Rick Caccia, the CEO of WitnessAI, believes it can.

“Securing AI models is a real problem, and it’s one that’s especially shiny for AI researchers, but it’s different from securing use,” Caccia, formerly SVP of marketing at Palo Alto Networks, told TechCrunch in an interview. “I think of it like a sports car: having a more powerful engine — i.e. model — doesn’t buy you anything unless you have good brakes and steering, too. The controls are just as important for fast driving as the engine.”

There’s certainly demand for such controls among enterprises, which — while cautiously optimistic about generative AI’s productivity-boosting potential — have concerns about the tech’s limitations.

Fifty-one percent of CEOs are hiring for generative AI-related roles that didn’t exist until this year, an IBM poll finds. Yet only 9% of companies say that they’re prepared to manage threats — including threats pertaining to privacy and intellectual property — arising from their use of generative AI, per a Riskonnect survey.

WitnessAI’s platform intercepts activity between employees and the custom generative AI models their employer is using — not models gated behind an API like OpenAI’s GPT-4, but more along the lines of Meta’s Llama 3 — and applies risk-mitigating policies and safeguards.

“One of the promises of enterprise AI is that it unlocks and democratizes enterprise data to the employees so that they can do their jobs better,” Caccia said. “But unlocking all that sensitive data too well — or having it leak or get stolen — is a problem.”

WitnessAI sells access to several modules, each focused on tackling a different form of generative AI risk. One lets organizations implement rules to prevent staffers on particular teams from using generative AI-powered tools in ways they’re not supposed to (e.g., asking about pre-release earnings reports or pasting internal codebases). Another redacts proprietary and sensitive info from the prompts sent to models, and implements techniques to shield models against attacks that might force them to go off-script.
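
WitnessAI hasn’t published implementation details, so the Python sketch below is only a hypothetical illustration of the intercept-and-filter pattern those modules describe: a prompt is checked against per-team policy rules and scrubbed of sensitive patterns before anything reaches a model. Every name in it (BLOCKED_TOPICS, REDACTIONS, screen_prompt) is an assumption made for illustration, not WitnessAI’s API.

```python
import re

# Hypothetical illustration of the intercept-and-filter pattern described
# above -- not WitnessAI's actual code or API.

# Per-team policy: topics a given team is not allowed to ask models about.
BLOCKED_TOPICS = {
    "sales": [re.compile(r"pre-release earnings", re.I)],
    "engineering": [],
}

# Patterns treated as sensitive and redacted before the prompt leaves.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),  # US SSNs
    (re.compile(r"(?s)-----BEGIN [A-Z ]+KEY-----.*?-----END [A-Z ]+KEY-----"),
     "[REDACTED-KEY]"),                                        # key material
]

def screen_prompt(team: str, prompt: str) -> str:
    """Raise if the prompt violates team policy; otherwise return a
    redacted copy that is safe to forward to the model."""
    for rule in BLOCKED_TOPICS.get(team, []):
        if rule.search(prompt):
            raise PermissionError(f"prompt blocked by policy for team {team!r}")
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    print(screen_prompt("engineering", "Summarize the ticket from user 123-45-6789"))
    # -> "Summarize the ticket from user [REDACTED-SSN]"
```

In a real deployment, a check like this would sit in the network path between employees and models, which is broadly how the platform’s interception is described above.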

“We think the best way to help enterprises is to define the problem in a way that makes sense, for example, safe adoption of AI, and then sell a solution that addresses the problem,” Caccia said. “The CISO wants to protect the business, and WitnessAI helps them do that by ensuring data protection, preventing prompt injection and enforcing identity-based policies. The chief privacy officer wants to ensure that existing — and incoming — regulations are being followed, and we give them visibility and a way to report on activity and risk.”

But there’s one tricky thing about WitnessAI from a privacy perspective: all data passes through its platform before reaching a model. The company is transparent about this, even offering tools to monitor which models employees access, the questions they ask the models and the responses they get. But it could create its own privacy risks.

In response to questions about WitnessAI’s privacy policy, Caccia said that the platform is “isolated” and encrypted to prevent customer secrets from spilling out into the open.

“We’ve built a millisecond-latency platform with regulatory separation built right in — a unique, isolated design to protect enterprise AI activity in a way that is fundamentally different from the usual multi-tenant software-as-a-service services,” he said. “We create a separate instance of our platform for each customer, encrypted with their keys. Their AI activity data is isolated to them — we can’t see it.”
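
Caccia doesn’t spell out the cryptography, but the per-customer isolation he describes maps onto a familiar pattern: each tenant’s activity data is encrypted with a key the customer controls, so the vendor holds only ciphertext. Here is a minimal sketch of that idea, assuming symmetric keys and Python’s cryptography library — both assumptions for illustration, not disclosed details of WitnessAI’s design.

```python
from cryptography.fernet import Fernet

# Minimal sketch of per-tenant encryption at rest, assuming each customer
# supplies its own symmetric key. An illustration of the isolation model
# Caccia describes, not WitnessAI's disclosed architecture.

class TenantStore:
    def __init__(self, customer_key: bytes):
        # The key stays with the customer; the vendor stores only ciphertext.
        self._cipher = Fernet(customer_key)
        self._records: list[bytes] = []

    def log_activity(self, event: str) -> None:
        self._records.append(self._cipher.encrypt(event.encode()))

    def read_activity(self) -> list[str]:
        # Only a holder of the customer's key can decrypt the activity log.
        return [self._cipher.decrypt(r).decode() for r in self._records]

if __name__ == "__main__":
    key = Fernet.generate_key()   # generated and held by the customer
    store = TenantStore(key)
    store.log_activity("alice asked model X about the Q3 roadmap")
    print(store.read_activity())
```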

Perhaps that’ll allay customers’ fears. As for workers worried about the surveillance potential of WitnessAI’s platform, it’s a tougher call.

Surveys show that people don’t generally appreciate having their workplace activity monitored, regardless of the reason — and believe it negatively impacts company morale. Nearly a third of respondents to a Forbes survey said that they might consider leaving their jobs if their employer monitored their online activity and communications.

But Caccia asserts that interest in WitnessAI’s platform has been and remains strong, with a pipeline of 25 early corporate users in its proof-of-concept phase. (It won’t become generally available until Q3.) And, in a vote of confidence from VCs, WitnessAI has raised $27.5 million from Ballistic Ventures (which incubated WitnessAI) and GV, Google’s corporate venture arm.

The plan is to put the tranche of funding toward growing WitnessAI’s 18-person team to 40 by the end of the year. Growth will certainly be key to beating back WitnessAI’s rivals in the nascent space for model compliance and governance solutions, not only from tech giants like AWS, Google and Salesforce but also from startups such as CalypsoAI.

“We’ve built our plan to get well into 2026 even if we had no sales at all, but we’ve already got almost 20 times the pipeline needed to hit our sales targets this year,” Caccia said. “This is our initial funding round and public launch, but secure AI enablement and use is a new area, and all of our features are developing with this new market.”

