To set up the new option, an adult ChatGPT user sends an invitation by email to their child. If the invitation is accepted, the adult can then decide whether the teenager can access ChatGPT’s voice mode or its image-generation feature, and whether the chatbot can reference prior conversations. The tools also let parents choose whether their child uses a restricted version of the chatbot designed to show less content related to topics such as dieting, sex and hate speech.
If ChatGPT detects that a teenager may be in mental distress, a human reviewer will determine whether to send an emergency alert to a parent. These alerts can be delivered via email, text message or notifications from the ChatGPT app.
Jonas said the alerts are meant to give parents enough knowledge about a potentially harmful situation to have a conversation with their teenager while still respecting the child’s privacy and autonomy. OpenAI will not share a teenager’s ChatGPT conversations with their parents, she said.
In addition to the parental controls, San Francisco-based OpenAI has said it is working on software to predict a user’s age, which the company plans to use to guide how ChatGPT responds to those who are under 18.