Chatbots like ChatGPT and Google Gemini are useful for brainstorming but pose risks of errors. Photo / Getty Images
Chatbots like ChatGPT, Google Gemini and Claude can be great for brainstorming or helping with difficult writing such as obituaries. But they’re also a minefield of potential goofs and embarrassments.
Just look at the publicly posted feed of conversations with the Meta AI chatbot. Some chatters seemed unaware that they were posting their conversations online: cringe-inducing dating questions, requests for tax-evasion advice and a plea for AI help finding a misplaced phone cord.
Please don’t do that, or make any of these other AI mistakes:
1. Be careful what you share, part one
A special warning about the Meta AI chatbot app: There’s a “Share” button at the top right corner of your chat. If you hit that option and then “Post,” your chat may be funnelled to a Facebook-like public feed called Discover with a stream of everyone’s AI conversations.
Some people appear to be accidentally using Meta AI as a public personal diary, or butt dialling with the app. Be careful. (This week, Meta added a warning when you’re about to post your AI chat online, though it didn’t appear consistently in the app.)
If you’re intentionally posting your AI chats publicly – why? Ask yourself whether you’d post the same thing on your Facebook page.
It’s also not clear why Meta thought it was a good idea to create a stream of everyone’s chatbot musings.
Be cautious about sharing personal information with chatbots, as it may be saved or leaked.
2. Don’t develop feelings for chatbots
Chatbots are designed to sound human and hold conversations that flow like a text gabfest with an old friend. Some “companion” chatbots can role-play as a romantic partner, including in sexual conversations.
But never forget that a chatbot is not your friend, lover or a substitute for human relationships.
Chatbots can sound human, leading to scams in romantic or investment chats. Photo / 123RF
If you’re lonely or uncertain in social situations, it’s okay to banter or practise with AI. Just be sure to take those skills into the real world.
You can also try asking a chatbot to recommend local meetups or organisations for people in your age group or at a similar life stage, or to offer advice on making personal connections.
3. Recognise when you’re talking to AI
AI is so good at mimicking human chatter that scammers use it to strike up conversations to trick people into sending money.
For safety, assume that anyone you meet only online is not who they say they are, particularly in romantic conversations or investment pitches. If you’re falling for someone you’ve never met, stop and ask a family member or friend whether anything seems off.
4. Know why chatbots spew weird stuff
Chatbots make things up constantly. They’re also designed to be friendly and agreeable so you’ll spend more time using them.
The combination sometimes results in obsequious nonsense, as when our Washington Post colleague found that OpenAI’s ChatGPT had invented passages from her own published columns, then fabricated an explanation for why that was happening. (The Post has a content partnership with OpenAI.)
Chatbots can leave users open to embarrassing blunders. Photo / Getty Images
When these oddities happen to you, it helps to know the reason: these are stupid computer errors.
AI companies could program their systems to respond, “This chatbot can’t access that information”, when you ask questions about essays, books or news articles that they aren’t peering into.
Instead, machines might act like a kid who has to give a book report but hasn’t read the book: they fabricate details and then lie when you catch them making stuff up.
An OpenAI spokesperson said the company is “continuously working to improve the accuracy and reliability of our models”, and referred to an online disclosure about ChatGPT’s errors.
5. Don’t just copy and paste AI text
If you use a chatbot to help you write a flirty message to a dating app connection, a wedding toast or a cover letter for a job, people can tell when your words come verbatim from AI. (Or they can paste your text into an AI detector, although these technologies are flawed.)
Avoid these mistakes when using AI chatbots. Photo / 123RF
Roman Khaves, CEO of AI dating assistant Rizz, suggested treating chatbot text as a banal first draft. Rewrite it so it sounds like you, adding specific details or personal references.
6. Be careful what you share, part two
Most chatbots will use at least some information from your conversations to “train” their AI, or they might save your information in ways you’re not expecting.
Niloofar Mireshghallah, an AI specialist and an incoming Carnegie Mellon University professor, was surprised to find that tapping the thumbs-up or thumbs-down option to rate a reply from Anthropic’s Claude starts a process in which you consent to the company saving your entire conversation for up to 10 years.
Chatbots can fabricate information; always verify and personalise any text they generate. Photo / 123RF
Anthropic said it’s transparent about this process in the feedback box and in its online Q&As.
Before confiding in chatbots, imagine how you’d feel if the information you’re typing were subpoenaed or leaked publicly.
Mireshghallah said she’s unnerved by the prospect of people working for chatbot companies reviewing conversations, which she said happens sometimes.
At minimum, Mireshghallah advised against entering personally identifiable or sensitive information, such as Social Security or passport numbers, into chatbots. (Use fake numbers if you need to.)