A few weeks ago, I became briefly famous for the wrong reasons.
I had not expected this. I had expected, maybe, curiosity. What I got instead felt like something older and more personal than a debate about journalism ethics — more like the look you get when a coworker figures out a shortcut and doesn’t share it.
I’ve been trying to understand the reaction ever since. The person who finally gave me a framework for it wasn’t a media critic or a journalism professor. She was Vivienne Ming, a neuroscientist who has spent 30 years wiring AI into human beings.
Ming recruited teams of UC Berkeley students to use AI tools to predict real-world outcomes on Polymarket — the forecasting exchange where professionals with real money bet on geopolitical events, commodity prices, and economic indicators. The task was specifically designed to be impossible to game from memory: no amount of studying would tell you what a barrel of oil would cost in six months. She wanted to see not whether AI helped, but how humans used it — and what that revealed about the humans themselves.
She also put EEG monitors on some participants.
Most participants simply handed the work over to the machine and accepted whatever it produced. Ming calls this group the automators.
A minority did something different: they interrogated the AI’s output, pushed back on it, and kept control of the ideas. Ming calls them cyborgs. They outperformed both the best individual humans in the study and the best AI models running alone. They were roughly on par with Polymarket’s expert markets: professionals with millions of dollars on the line.
Ming identified four traits that reliably predicted whether someone became a cyborg or an automator: curiosity, fluid intelligence, intellectual humility, and perspective-taking. They are worth naming carefully, because they matter more than anything else in this story. Measured in children, Ming notes, these same traits predict lifetime earnings and all-cause mortality rates. “There’s a reason these things are predictive of life outcomes, because they change how we engage with the world.” They are not incidental or peripheral qualities. They are the deepest measures of human capability we have, and they are almost entirely absent from the hiring systems and educational frameworks that currently sort people into careers.
When Ming described the cyborg profile to me, I told her (with as much intellectual humility as I could muster) that it sounded like me. In my own journalism, the AI handles much of the well-posed work: what does this transcript say, how does this connect to that data. I try to handle the ill-posed work: what is the real story here, what does it mean, why does it matter.
My process isn’t complicated. I use AI to generate first drafts from my notes, to find angles I might have missed, to synthesize large amounts of material quickly. Then I check everything — every quote against the original transcript, every claim against the source. I ask the AI what I’m missing. I push back when it goes in a direction I don’t recognize. I try to stay in control of the ideas. And it’s true, I have been thinking of myself as more and more of a cyborg for months now.
“I think most interesting problems in the world are ill-posed,” Ming said, adding that she sees a world struggling to adjust because it’s been built for much easier problems. “We built a whole employment system that’s based on people getting some degree of an education to answer well-posed questions that nowadays are better answered by a machine.” This could explain much of the backlash — and much of the scramble within the C-suite, as boards ask McKinsey leaders like Smaje to suddenly pivot their companies from well-posed to ill-posed problems.
But the ambient dread — the kind that fills comment sections and manifests as professional outrage when a colleague admits to using a tool differently than expected — that, she argues, is not really about the technology. It is the specific anxiety of watching someone else gain leverage you haven’t figured out how to gain yourself. A cyborg colleague doesn’t just work faster. They implicitly change what the job is, and in doing so, indict the way you’ve been doing it.
Other people I spoke with for this piece had each, in their own way, run into the same wall.
West Monroe calculated that AI added the equivalent of 320 full-time employees’ worth of output in six months without adding headcount, according to Greenstein. He said that when he showed people what was possible, some lit up. Others shut down — not because the technology was hard, but because it made their sense of professional self suddenly feel unstable.
Poleg argued that for 50 years, the economy’s center of gravity has been shifting toward producing intangible rather than tangible things, meaning “more inequality, more uncertainty, more professions, fewer places to hide, like fewer normal jobs where you can just learn something, and that knowledge will remain useful for the next 20, 30, 40 years, and you’ll just do the same thing.” AI didn’t create this shift. It made a decades-old trend impossible to ignore, and over the last four years gave it a new face.
The stakes beneath the culture war are significant enough to deserve examination apart from it.
Ming sees demand for both well-posed labor (low pay, low autonomy) and ill-posed labor (high pay, high creativity), but she considers the supply of the latter highly inelastic. Just because there’s more demand for creative problem solvers doesn’t mean workers will get more creative. “We’re acting as though demand automatically produces supply,” she said. “There’ll be lots of jobs. Most of them will be mediocre and have little autonomy. And the ones that people really want will become even more esoteric, and the competition for that elite labor will go up.” After all, she added, there is no six-week job retraining program for cyborgs.
Critics are not wrong to be worried, Ming said. They are simply wrong about what to worry about. The automators in her study weren’t bad people making lazy choices; they were doing what most humans do when handed a powerful tool and no framework for using it well. They optimized for the appearance of productivity rather than its substance. The machine lowered their cognitive load, and they accepted the gift without asking what it cost them.
Ming has been arguing for a generation that education systems need to change — away from passive absorption of well-posed answers, toward active cultivation of exactly these traits. Nothing has changed. She is not sanguine about the timeline. But she is still running experiments, still building companies, still asking what she is missing.
That last part, I think, is the whole point.
The backlash I received was, in its way, a gift. Not because it was fair — I don’t think it was — but because it was clarifying. The argument was never really about whether I fact-checked my quotes or disclosed my process. It was about something older: the anxiety of a professional class watching the tools of their trade become accessible to more people, in more configurations, with less gatekeeping than before.
The EEG data suggest that getting mad about it is, neurologically speaking, the equivalent of watching TV.
For this story, Fortune journalists used generative AI as a research tool. An editor verified the accuracy of the information before publishing.