Five years ago, I predicted that 95% of internet content would be AI-generated by 2025. It’s hard to measure exactly where we are, but we’ll get close to that mark in the coming years.
But once the novelty fades, no one will care how a video was made. Every media shift goes through this cycle. Painters debated cameras. Print debated digital. Provenance debates burn hot, then cool, as audiences converge on what they actually value: Was it worth my time? Did I learn something? Did it help me decide faster? We’ll stop caring whether a video came from an iPhone or a GPU, except in select formats like news or sports.
Until then, things will get very weird, very fast. As production costs collapse, we’ll see an explosion of AI-generated “watchbait”: short, disposable clips optimized for the swipe. Platforms will rush to release AI-native creative tools, give them away, fill feeds, and monetize the engagement loop with ads.
In the enterprise world, the impact of Sora 2 will be slower. Companies don’t trade in eight-second AI slop; they buy measurable business outcomes. To be useful, generative video must do more than shock. It has to deliver results: faster creation, lower costs, higher engagement. That means integrating models into full workflows: editing, guardrails, translation, versioning, collaboration, distribution, and analytics.
As video production costs approach those of text, the new competitive unit is clarity, creativity, and trust. Our Sora 2 integration moves in that direction: not just a fun demo, but a tool that helps teams tell better stories, faster.
The frenzy will pass. The flood will recede. What will remain is the same question every creator has asked for a century: What’s the best way to say what we need to say?
More often than not, the answer will be AI video.