A man wants to know how to help his friend come out of the closet. An aunt struggles to find the right words to congratulate her niece on her graduation. And one guy wants to know how to ask a girl - “in Asian” - if she’s interested in older men.
Meta users don’t know their intimate AI chats are out there for all to see
Mark Zuckerberg, chief executive of Meta, said that one of the main reasons why people used Meta AI was to talk through difficult conversations they need to have with people in their lives. Photo / David Paul Morris, Bloomberg via Getty Images
Since the April launch, the app’s discover feed has been flooded with users’ conversations with Meta AI on personal topics about their lives or their private philosophical questions about the world.
As the feature gained more attention, some users appeared to purposely promote comical conversations with Meta AI.
Others are publishing AI-generated images about political topics such as Trump in nappies, images of girls in sexual situations and promotions to their businesses.
In at least one case, a person whose apparently real name was visible asked the bot to delete an exchange after posting an embarrassing question.
The flurry of personal posts on Meta AI is the latest indication that people are increasingly turning to conversational chatbots to meet their relationship and emotional needs.
As users ask the chatbots for advice on matters ranging from their marital problems to financial challenges, privacy advocates warn that users’ personal information may end up being used by tech companies in ways they didn’t expect or want.
“We’ve seen a lot of examples of people sending very, very personal information to AI therapist chatbots or saying very intimate things to chatbots in other settings,” said Calli Schroeder, a senior counsel at the Electronic Privacy Information Centre.
“I think many people assume there’s some baseline level of confidentiality there. There’s not. Everything you submit to an AI system at bare minimum goes to the company that’s hosting the AI.”
Meta spokesman Daniel Roberts said chats with Meta AI are set to private by default, and users must actively tap the share or publish button before a conversation shows up in the app’s discover feed.
While some real identities are evident, people are able to pick a different username for the discover feed.
Still, the company’s share button doesn’t explicitly tell users where their conversations with Meta AI will be posted or what other people will be able to see - a fact that appeared to confuse some users of the new app.
Meta’s approach of blending social networking components with an AI chatbot designed to give personal answers is a departure from the approach of some of the company’s biggest rivals.
ChatGPT and Claude give similarly conversational and informative answers to questions posed by users, but there isn’t a similar feed where other people can see that content.
Video- or image-generating AI tools such as Midjourney and OpenAI’s Sora have pages where people can post their work and see what AI has created for others, but neither service engages in text conversations that turn personal.
The discover feed on Meta AI reads like a mixture of users’ personal diaries and Google search histories, filled with questions ranging from the mundane to the political and philosophical.
In one instance, a husband asked Meta AI in a voice recording about how to grow rice indoors for his “Filipino wife”.
Users asked Meta about Jesus’ divinity, how to get picky toddlers to eat, and how to budget while enjoying daily pleasures.
The feed is also filled with images created by Meta AI but conceived by users’ imaginations, such as one of United States President Donald Trump eating poop and another of the grim reaper riding a motorcycle.
Research shows that AI chatbots are uniquely designed to elicit users’ social instincts by mirroring humanlike cues that give people a sense of connection, said Michal Luria, a research fellow at the Centre for Democracy and Technology, a Washington think-tank.
“We just naturally respond as if we are talking to … another person, and this reaction is automatic,” she said. “It’s kind of hard to rewire.”
In April, Meta chief executive Mark Zuckerberg told podcaster Dwarkesh Patel that one of the main reasons why people used Meta AI was to talk through difficult conversations they need to have with people in their lives - a use he thinks will become more compelling as the AI model gets to know its users.
“People use stuff that’s valuable for them,” he said. “If you think something someone is doing is bad and they think it’s really valuable, most of the time in my experience, they’re right and you’re wrong.”
Meta AI’s discover feed is filled with questions about romantic relationships - a popular topic people discuss with chatbots.
In one instance, a woman asks Meta AI if her 70-year-old partner can really be a feminist if he says he’s willing to cook and clean but ultimately doesn’t.
Meta AI tells her the obvious: that there appears to be a “disconnect” between her partner’s words and actions.
Another user asked about the best way to “rebuild yourself after a break-up”, eliciting a boilerplate list of tips about self-care and setting boundaries from Meta AI.
Some questions posed to Meta took an illicit turn.
One user asked Meta AI to generate images of “two 21-year-old women wrestling in a mud bath” and then posted the results on the discover feed under the headline “Muddy bikinis and passionate kisses”. Another asked Meta AI to create an image of a “big booty white girl”.

There are few regulations pushing tech companies to adopt stricter content or privacy rules for their chatbots.
In fact, the US Congress is considering passing a tax and immigration bill that includes a provision to roll back state AI laws throughout the country and prohibit states from passing new ones for the next decade.
In recent months, a couple of high-profile incidents triggered questions about how tech companies handle personal data, who has access to that data, and how that information could be used to manipulate users.
In April, OpenAI announced that ChatGPT would be able to recall old conversations that users did not ask the company to save.
On X, chief executive Sam Altman said OpenAI was excited about “[AI] systems that get to know you over your life and become extremely useful and personalised”.
The potential pitfalls of that approach became obvious the following month, when OpenAI had to roll back an update to ChatGPT that incorporated more personalisation because it made the tool sycophantic and manipulative towards users.
Last week, OpenAI’s chief operating officer Brad Lightcap said the company intended to keep its privacy commitments to users after plaintiffs in a copyright lawsuit led by the New York Times demanded that OpenAI retain customer data indefinitely.
Ultimately, it may be users who push the company to offer more transparency.
One user questioned Meta AI on why a “ton of people” were “accidentally posting super personal stuff” on the app’s discover feed.
“Ok, so you’re saying the feed is full of people accidentally posting personal stuff?” the Meta AI chatbot responded.
“That can be pretty wild. Maybe people are just really comfortable sharing stuff or maybe the platform’s defaults are set up in a way that makes it easy to over-share. What do you think?”