Artificial intelligence may be a promising way to boost workplace productivity, but leaning on the technology too heavily may stop professionals from keeping their own skills sharp. New research suggests AI may already be making some doctors worse at detecting irregularities during routine screenings, raising concerns that specialists are relying too much on the technology.
The doctors’ declining ability to detect polyps in the colon once they stopped using AI assistance came as a surprise to Dr. Marcin Romańczyk, a gastroenterologist at H-T. Medical Center in Tychy, Poland, and the study’s author. The results raise concerns not only about a complacency that can develop from overreliance on AI, but also about the changing relationship between medical practitioners and their longstanding tradition of analog training.
“We were taught medicine from books and from our mentors. We were observing them. They were telling us what to do,” Romańczyk said. “And now there’s some artificial object suggesting what we should do, where we should look, and actually we don’t know how to behave in that particular case.”
Romańczyk’s study contributes to a growing body of research questioning humans’ ability to use AI without compromising their own skill set. In his study, an AI system helped identify polyps in the colon by drawing a green box around the region where an abnormality appeared to be. To be sure, Romańczyk and his team did not measure why endoscopists behaved this way: they had not anticipated the outcome and therefore collected no data on its cause.
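For readers curious what that kind of assistance looks like in software terms, here is a minimal sketch of how a green-box detection overlay might be rendered. It assumes a model that returns bounding boxes with confidence scores; the `detect_polyps` stub and file names are hypothetical illustrations, not the study's actual system.

```python
import cv2  # OpenCV, used here for image I/O and drawing

# Hypothetical stand-in for a trained polyp-detection model.
# A real computer-aided detection (CADe) system would run a neural
# network here; this stub only illustrates the assumed output
# format: bounding boxes with confidence scores.
def detect_polyps(frame):
    # Each detection: (x, y, width, height, confidence)
    return [(120, 80, 60, 60, 0.91)]

def annotate_frame(frame, min_confidence=0.5):
    """Draw a green box around each suspected polyp, mirroring
    the on-screen cue described in the study."""
    for x, y, w, h, conf in detect_polyps(frame):
        if conf >= min_confidence:
            cv2.rectangle(frame, (x, y), (x + w, y + h),
                          color=(0, 255, 0), thickness=2)  # BGR green
    return frame

if __name__ == "__main__":
    frame = cv2.imread("endoscopy_frame.png")  # hypothetical input image
    if frame is not None:
        cv2.imwrite("annotated_frame.png", annotate_frame(frame))
```

The key design point is that the system only highlights regions; the endoscopist still decides what to do with them, which is exactly where the study suggests attention may be drifting.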
The real-life consequences of automation atrophying critical human skills are already well documented; aviation offers the classic example, with pilots' manual flying skills eroding as cockpits grew more automated.
Such incidents bring periods of reckoning, particularly in critical sectors where human lives are at stake, according to Lynn Wu, associate professor of operations, information, and decisions at the University of Pennsylvania’s Wharton School. While industries should lean into the technology, she said, the onus should be on institutions to make sure humans adopt it appropriately.
“What is important is that we learn from this history of aviation and the prior generation of automation, that AI absolutely can boost performance,” Wu told Fortune. “But at the same time, we have to maintain those critical skills, such that when AI is not working, we know how to take over.”
Likewise, Romańczyk isn’t rejecting AI’s place in medicine.
“AI will be, or is, part of our life, whether we like it or not,” he said. “We are not trying to say that AI is bad and [to stop using] it. Rather, we are saying we should all try to investigate what’s happening inside our brains: How are we affected by it? How can we actually effectively use it?”
If professionals and specialists want to keep using automation to enhance their work, it behooves them to retain their critical skills, Wu said. AI models are trained on human-generated data, meaning that if the humans supplying that data lose their expertise, the models’ output will degrade with them.
“Once we become really bad at it, AI will also become really bad,” Wu said. “We have to be better in order for AI to be better.”