The not-yet-public information was made accessible via the company’s content management system (CMS), which Anthropic uses to publish content to sections of its website.
In total, the cache contained close to 3,000 publicly accessible assets linked to Anthropic’s blog that had not previously been published to the company’s public-facing news or research sites, according to Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge whom Fortune asked to review and assess the material.
After Fortune informed Anthropic of the issue on Thursday, the company took steps to secure the data so that it was no longer publicly accessible.
Prior to taking these measures, Anthropic stored all the content for its website, such as blog posts, images, and documents, in a central system that was accessible without a login. Anyone with technical knowledge could send requests to that public-facing system, asking it to return information about the files it contained.
While some of this content had not been published to Anthropic’s website, the underlying system would still return the digital assets it was storing to anyone who knew how to ask. That meant unpublished material, including draft pages and internal assets, could be accessed directly.
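To illustrate the general class of exposure (a minimal sketch only; the host and endpoint names below are hypothetical placeholders, not Anthropic’s actual system), enumerating a world-readable asset store can require nothing more than an unauthenticated HTTP request:

```python
import requests

# Hypothetical sketch: many CMS asset stores expose a listing endpoint.
# If the store is world-readable, a plain GET with no credentials is
# enough to enumerate every stored file, published or not.
BASE_URL = "https://assets.example-cms.com"  # placeholder host

resp = requests.get(f"{BASE_URL}/files", params={"limit": 100}, timeout=10)
resp.raise_for_status()

for asset in resp.json().get("files", []):
    # Each record typically carries a direct browser address for the file.
    print(asset.get("name"), asset.get("url"))
```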
The issue appears to stem from how the CMS used by Anthropic works. All assets uploaded to the central data store, such as logos, graphics, or research papers, were public by default unless explicitly set as private. The company appeared to have forgotten to restrict access to some documents that were not supposed to be public, leaving the large cache of files available in the company’s public data lake, cybersecurity professionals who analyzed the data told Fortune. Several of the company’s assets also had direct, publicly reachable browser addresses.
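Public-by-default permissions typically arise in upload code like the following (again a hypothetical sketch under assumed names, not Anthropic’s configuration): unless the caller explicitly overrides an access flag, every uploaded file receives a world-readable address.

```python
import requests

BASE_URL = "https://assets.example-cms.com"  # placeholder host, not a real system

def upload_asset(path: str, public: bool = True) -> str:
    """Upload a file to a hypothetical CMS asset store.

    Note the default: unless the caller passes public=False, the asset
    is stored with a world-readable access setting. Forgetting the flag
    on a single upload is enough to leave a draft document publicly
    accessible.
    """
    with open(path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/files",
            files={"file": f},
            data={"access": "public" if public else "private"},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()["url"]  # direct browser address for the stored asset
```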
While many of the documents appeared to be discarded or unused assets from past blog posts, such as images, banners, and logos, some contained sensitive information.
The documents included details of upcoming product announcements, among them information about an unreleased AI model that Anthropic described as the most capable model it has yet trained.
After being contacted by Fortune, the company acknowledged that it is developing a new model, currently being tested with early-access customers, that it said represented a “step change” in AI capabilities, with significantly better performance in “reasoning, coding, and cybersecurity” than prior Anthropic models.
The publicly accessible data also included information about an upcoming, invite-only retreat for the CEOs of large European companies being held in the U.K. that Anthropic CEO Dario Amodei is scheduled to attend. An Anthropic spokesperson said the retreat was “part of an ongoing series of events we’ve hosted over the past year” and the company was “developing a general-purpose model with meaningful advances in reasoning, coding, and cybersecurity.”
Among the documents were also images that appear to be for internal use, including one image with a title that describes an employee’s “parental leave.”
It’s not the first time a tech company has inadvertently exposed internal or pre-release assets by leaving them publicly accessible before official announcements.
However, the problem is likely exacerbated by AI coding tools now readily available on the market, including Anthropic’s own Claude Code, which make it far easier for people without deep technical expertise to write the kind of scripts that can find and download files from an exposed data store.



