“I think about it a bit like Levi’s in the gold rush,” said Segev. Just as every gold digger needed a good pair of jeans, every enterprise company needs to adopt AI securely, he explained.
The company also recently launched a new research lab to help companies get ahead of the fast-growing security risks created by AI. The team studies how data and AI systems actually interact inside large organizations—tracking where sensitive information lives, who can access it, and how new AI tools might expose it.
I must say I was surprised to hear Segev describe the current state of AI security as “grim,” leaving CISOs—chief information security officers—caught between a rock and a hard place. One of the biggest problems, he and Wittenberg told me, is that employees are using public AI tools such as ChatGPT, Gemini, Copilot, and Claude either without company approval or in ways that violate policy—like feeding sensitive or regulated data into external systems. CISOs, in turn, face a tough choice: block AI and slow innovation, or allow it and risk massive data exposure.
“They know they’re not going to be able to say no,” said Segev. “They have to allow the AI to come in, but the existing visibility controls and mitigations they have today are way behind what they need them to be.” Regulated organizations in industries like healthcare, financial services, or telecom are actually in a better position to slow things down, he explained: “I was meeting with a CISO for a global telco this week. She told me, ‘I’m pushing back. I’m holding them at bay. I’m not ready.’ But she has that privilege, because she’s a regulated entity, and she has that place in the company. When you go one step down the list of companies to less regulated entities, they’re just being trampled.”
For now, companies aren’t in too much hot water, Wittenberg said, because most AI tools aren’t yet fully autonomous. “It’s just knowledge systems at this point—you can still contain them,” he explained. “But once we reach the point where agents take action on behalf of humans and start talking to each other, if you don’t do anything, you’re in big trouble.” He added that within a couple of years, those kinds of AI agents will be deployed across enterprises.
“Hopefully the world will move at a pace that we can build security for it in time,” he said. “We’re trying to make sure that we’re ready, so we can help organizations protect it before it becomes a disaster.”
Yikes, right? To borrow from A Few Good Men again, I wonder if companies can really handle the truth: when it comes to AI security, they need all the help they can get on that wall.
With that, here’s more AI news.



