The company will start formally notifying users of the change on October 7. There’s no opt-out: If you don’t want your chatbot conversations influencing your ads, the only option is not to use Meta AI at all.
Emily Bender, a linguist at the University of Washington and co-author of the widely cited “Stochastic Parrots” paper on the risks of large language models (LLMs), told Fortune the company is blurring a dangerous line.
“They’re already farming your clicks and posts to target ads. Now they’re mining your conversations with chatbots,” Bender said. “The obvious next concern is whether the chatbot itself will start nudging people to disclose information that makes them more targetable.”
It’s surveillance under the guise of personalization, Bender argues, with an unprecedented ability to extract personal details from users.
“Before, Meta’s systems watched who you connected to and what your communities were doing. Now it’s directly: What are you saying to the company?” she said. “And, of course, they can combine that with all the other data they already have.”
Bender says Meta is capitalizing on what she calls the “illusion of privacy.” People often confide in chatbots about things they’d never post publicly, lulled into a sense that the AI is a neutral listener.
“There’s this illusion of privacy, when in fact what you’re doing is you’re serving up this data to a company,” she said.
Harris offered a cheery example: if you ask Meta AI about planning a family vacation, you might see more family-travel Reels in your feed, along with hotel ads. Those interactions, whether typed into a phone or processed through microphones on Meta’s Ray-Ban glasses, will now be treated as new advertising signals.
Meta told Fortune its AI work is currently focused on “building a great consumer experience,” stressing that ads will not appear inside chatbot conversations themselves.
The company also pointed to existing privacy tools: People can “reset or correct an AI” through settings, and data retention follows Meta’s broader Privacy Policy.
Bender calls these chatbots “a scourge” and warns that framing them as friendly, always-available companions can be harmful.
She worries the more Meta ties AI assistants to its ad business, the stronger the incentives become to keep users talking — not to help them, but to maximize engagement.
“It probably also adds to the financial incentives for Meta to keep people chatting with the chatbots — to optimize on engagement, which is one of the vectors for harm,” she said.
Meta countered that its protections for young people extend to AI interactions.
“With Instagram Teen Accounts, teens are defaulted into the strictest setting of our sensitive content control so that they’re even less likely to be recommended sensitive content – and teens under 16 can’t change this setting without a parent’s permission. This is no different for interactions with AI at Meta,” a spokesperson told Fortune.