“For me personally, it has been 100% for two+ months now, I don’t even make small edits by hand,” Cherny wrote in a post on X responding to AI researcher Andrej Karpathy. “I shipped 22 PRs (pull requests) yesterday and 27 the day before, each one 100% written by Claude.”
The comments echo remarks made by Anthropic CEO Dario Amodei at the World Economic Forum earlier this month, where he noted that some engineers at his company have stopped writing code themselves and instead rely on AI models to generate it while they focus on editing. At Davos, Amodei predicted that the industry may be just six to twelve months away from AI handling most or all of software engineering work from start to finish.
While industry insiders have clear incentives to hype their own tools, there is a growing consensus that the rise of AI coding tools has already fundamentally changed software development.
Cherny, for one, believes these figures will continue to climb, and that other companies will soon reach similar levels of AI code generation. “I think most of the industry will see similar stats in the coming months—it will take more time for some vs others,” he wrote. “We will then start seeing similar stats for non-coding computer work also.”
Anthropic’s tools have become a favorite of software engineers over the last few years. But the release of Claude Code has resonated with both coders and non-coders and sparked a viral moment for the company that hasn’t been seen since ChatGPT’s debut. After users pointed out that Claude Code was more of a general-purpose AI agent, Anthropic created a version of the product for non-coders, launching Cowork, a file management agent that is essentially a user-friendly version of the coding product. Cherny said his team built Cowork in approximately a week and a half, largely using Claude Code itself.
Even before the public frenzy, Cherny said the tool was making waves within the company itself.
“Somewhere around a year ago…we had this idea that the model was powerful enough that we could use it for a different kind of coding…we started to try out internally, and it just immediately took off,” Cherny told Fortune in an interview last week. “I have never had this much joy day to day in my work, as I do right now, because essentially all the tedious work, Claude does it, and I get to be creative. I get to think about what I want to build next.”
Cherny said he also uses Claude Code for various admin aspects outside of coding, including project management tasks like automatically messaging team members on Slack when they haven’t updated shared spreadsheets.
“Engineers just feel unshackled, that they don’t have to work on all the tedious stuff anymore,” he said.
The rise of AI-generated code has significantly impacted the software industry. Many Big Tech companies have been open about the fact that AI models are writing significant amounts of their code. But the automation of much of the coding process has also raised questions about the future of software engineering roles, particularly entry-level positions that have traditionally served as training grounds for the profession.
Tech companies argue that rapid adoption of AI coding tools like Claude Code and GitHub Copilot will democratize coding, allowing those with little to no technical skill to build products by prompting AI systems in natural language. But while the two are not definitively causally linked—and other factors have contributed to the hiring downturn—open roles for entry-level software engineers have indeed declined as the amount of code written by generative AI has ramped up.
The shift is already changing how Anthropic approaches hiring. Cherny said his team now hires mostly generalists rather than specialists, since many traditional programming skills are less relevant when AI handles implementation details.
“Not all of the things people learned in the past translate to coding with LLMs,” Cherny wrote. “The model can fill in the details.”
While Cherny emphasized the productivity gains and creative freedom that AI coding tools provide, he also acknowledged that the technology is still developing. According to Karpathy’s assessment, models can make “subtle conceptual errors,” over-complicate code, and leave dead code around. Despite the limitations, engineers like Cherny are confident that AI-generated code quality will only continue to improve.