Before Fei-Fei Li helped launch the modern era of artificial intelligence, she was running a dry-cleaning business in suburban New Jersey.
Even as she navigated the manicured campus at Princeton, Li joked that she was the “CEO” of her parents’ shop. As the only one in the family who spoke English, she balanced physics problem sets with “all the business”: answering phones, managing inspections, talking to customers, and handling billing. When she left for Caltech to begin her PhD, the job didn’t end; she kept running the dry-cleaning business remotely until halfway through graduate school, she told Bloomberg.
The experience, she says, taught her resilience: the quality she now considers essential in both science and life.
“Science is a nonlinear journey,” she told Bloomberg. “Nobody has all the solutions. You have to go through such a challenge to find an answer.”
At Princeton, Li gravitated toward physics, drawn to its audacity: the idea that you could ask the biggest possible questions about the universe. Eventually, her own “audacious question,” as she puts it, shifted: What is intelligence? How does it arise? And could machines learn it? That curiosity carried her to Caltech, where a single realization would end up transforming the entire field of AI almost by accident.
At the time, computer-vision research was floundering. Algorithms weren’t working, and no one knew why. Li began looking outside computer science—toward psychology, linguistics, and how humans organize the world—and noticed something obvious that the field had overlooked: Humans learn from huge amounts of experience. Computers were trying to learn from datasets with just a few hundred images.
“The scientific datasets we were playing with were tiny,” she told Bloomberg.
Li wasn’t trying to revolutionize the field; she was just following a hunch that everyone else thought was misguided.
“Pre-ImageNet, people did not believe in data,” Li later said. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”
So, dragging along impatient graduate students, she set out to build what didn’t exist. The result was ImageNet: 15 million labeled images across 22,000 categories, organized using insights from human cognition.
That was the turning point.
That project, which she thought of at the time as simply the natural next step in her research, is the reason she’s now known as the “godmother of AI.”