But the bots are still with us. Officially defined as software applications that run automated, repetitive tasks, bots continue to swim through the digital ether, and they’re a key aspect of the artificial intelligence (AI) revolution that threatens to undo the internet as it’s been known since the mid-1990s.
This bot-driven inflation may be feeding into a broader tech and AI investment bubble. As companies report rapid user growth and engagement, investors chase the next big thing, and the result is a market environment reminiscent of the dot-com era, in which hype and inflated metrics risk overshadowing real business fundamentals.
History suggests that markets eventually correct when reality catches up to inflated expectations. Several factors point to a similar reckoning for AI and the bot problem. Recognition of fake metrics is one: as awareness grows about the scale of bot-driven inflation, investors and analysts may become more skeptical of headline user numbers and engagement stats. New regulations, meanwhile, are beginning to address the economic incentives behind bot-driven manipulation.
Regulating bots on the internet has become a critical focus for governments in response to their growing presence in commerce, social media, and consumer interactions. Bots can be used for legitimate purposes, such as assisting with customer service, but also for malicious ones: spreading misinformation, generating fake reviews, scalping tickets, or manipulating public opinion. In the U.S., that regulatory work falls mainly to the Federal Trade Commission (FTC).
The FTC is the leading federal agency addressing deception and unfair practices involving bots, especially those affecting consumers and commerce. In 2024, the FTC issued a final rule prohibiting fake and AI-generated consumer reviews and testimonials, which applies to both traditional and AI-powered bots that generate misleading content or endorsements online.
Businesses can also face civil penalties for buying, selling, or disseminating fake reviews or endorsements, whether authored by bots or humans. The rule aims to ensure transparency in online marketplaces and curb deceptive practices.
From Congress, there’s the BOTS Act (Better Online Ticket Sales Act), enacted in 2016 and strengthened by executive order in 2025, which specifically targets the use of automated bots to circumvent controls on ticket purchases for concerts and events, a tactic often used by scalpers. The FTC enforces this law, which makes it illegal to use bots to bypass security or purchasing limits when acquiring event tickets. It could be thought of as the “Taylor Swift” law: fans learned the problem firsthand, to their displeasure, during her record-setting Eras Tour, when new tickets disappeared in seconds, gobbled up by bots.
The FTC also regularly issues business guidance calling for transparency and accuracy about AI chatbots and avatar services, warning against misleading consumers through these technologies. The agency advises companies to clearly disclose when users are interacting with bots, ensure bots do not misrepresent capabilities, and avoid using bots to manipulate or deceive consumers.
Some states, such as California, have passed laws requiring bots to identify themselves when attempting to influence a voter or consumer. Other states have introduced similar bills modeled after California’s “Bolstering Online Transparency Act,” though questions of federal preemption and cross-border enforcement remain.
As bot-driven metrics are exposed, companies with inflated user numbers may see their valuations fall, especially if they can’t demonstrate real, sustainable growth. The market may consolidate around companies with proven, human-driven engagement and revenue, while those reliant on artificial metrics struggle or fail. Expect increased demand for third-party verification of user and engagement data, as well as more robust bot-detection and filtering in analytics.
Then again, bots have been a feature of computing for over half a century, and they’ve only grown more plentiful over time. Bot-driven inflation of internet statistics may simply become an inevitable part of digital life.
For this story, Fortune used generative AI to help with an initial draft. An editor verified the accuracy of the information before publishing.