Studies suggest humans experience shorter attention spans, distorted memories, and shifts in self-esteem due to “brain rot,” or a dependence on low-quality online content. Researchers now say the same phenomenon can affect artificial intelligence (AI) models, too.
The study, published on arXiv, the open-access scholarly article archive, has not yet been peer-reviewed.
In contrast to previous criticism of AI models’ sycophantic, kiss-up tendencies, the study found that LLMs, including Meta’s open-source Llama 3 and versions of Alibaba’s Qwen LLM, became less agreeable when they were trained on junk data. Worse yet, the researchers found that AI brain rot brought out an LLM’s darkest traits, including higher rates of psychopathy and narcissism.
When researchers tried to “heal” the LLMs with higher-quality, human-written data through a process called “instruction tuning,” the models still showed lingering effects: a significant gap remained between the quality of their reasoning and their baseline performance before the junk-data diet.
“The gap implies that the Brain Rot effect has been deeply internalized, and the existing instruction tuning cannot fix the issue. Stronger mitigation methods are demanded in the future,” the researchers wrote.
Because AI models are trained on trillions of data points from across the internet, the researchers warned that LLMs, just like humans, are “inevitably and constantly” exposed to this low-quality content, which could pose risks for the technology as a whole.
All of this adds up to a potential danger: AI models that aren’t trained on quality data could ultimately put human safety at risk.
The researchers’ recommendation: AI companies need to stop merely hoarding massive amounts of data and focus on the quality of the data being used to train their LLMs. They may also need to conduct routine “cognitive health checks” on the models—or else risk a full-blown safety crisis.
“Such persistent Brain Rot effect calls for future research to carefully curate data to avoid cognitive damages in pre-training,” the researchers wrote.