Briggs called this “93-7” split a critical error. Organizations are obsessing over the “ingredients” (the models, chips, and software) while ignoring the “recipe” of culture, workflow, and training required to make the technology work. Briggs compared this tech-heavy approach to “trying to get paella” but ending up with “just cilantro.”
Briggs, who is based near Kansas City but travels frequently to New York and around the country, said the 93-7 split genuinely surprised him. “I felt it in my travels, but I hadn’t been able to quantify it,” he said. He likened the imbalance to every technology wave: the easiest move is to apply the new tech to the way a company has always worked. “This incrementalism is a hard trap to get out of.”
To correct the 93-7 imbalance, Briggs suggested a radical shift in how companies view AI agents. As organizations move from “carbon-based” to “silicon-based” employees (that is, from humans to semiconductor chips and robots), they must establish the equivalent of an HR process for agents, robots, and advanced AI, and confront complex questions about liability and performance management. This will be hard. He offered a hypothetical: a human creates an agent, and that agent spawns five more generations of agents. If wrongdoing occurs in the fifth generation, whose fault is it? “What’s a disciplinary action? You’re gonna put your line robot…in a timeout and force them to do 10 hours of mandatory compliance training?”
Workers who turn to unauthorized AI tools say they are “easier to access” and “better and more accurate” than the approved corporate solutions. The disconnect has eroded confidence: corporate workers’ trust in gen AI declined 38% between May and July 2025. The data also supports the case for a human-centric approach. Workers who received hands-on AI training and workshops reported 144% higher trust in their employer’s AI than those who did not.
For CEOs and boards, the reluctance to address the cultural shift stems from a deeper fear that today’s investment will be obsolete by next week. Briggs noted that leaders are terrified of committing to a vendor only to face “buyer’s remorse” when a better model is released days later. “CEOs and boards, they’re scared because they don’t want to commit at the wrong time,” he said. Delaying is easy: there could always be another release next week, or the week after that.
Briggs likened this mentality to trying to time the stock market perfectly, and he argued that the resulting hesitation is “almost like a pre-snap penalty” in sports, setting a team back before the play even begins. He insisted that the fastest path to progress is simply getting started on a solution, regardless of how crowded the market is.
The urgency to fix this ratio is compounded by the arrival of “physical AI,” which moves beyond text generation to robotics and drones. Real-world applications are already proving the value of getting the integration right; for example, HPE saw 50% faster reporting from data to decision after deploying Zora AI.
For Briggs, the message to the C-suite is clear: The technology is ready, but unless leaders shift their focus to the human and cultural transformation, they risk being left with expensive technology that no one trusts enough to use. As Briggs warned, “No matter how much traffic there is, the sooner you leave, the sooner you can get there.”