"With AI, it isn't just children's emotional well-being that's at risk - it's also their cognitive development." Photo / 123rf
"With AI, it isn't just children's emotional well-being that's at risk - it's also their cognitive development." Photo / 123rf
AI tools can hinder cognitive development in students. Parents are essential to fostering responsible use.
Artificial intelligence is starting to appear in many aspects of teaching and learning. Spurred by hopes that these tools will improve and personalise children’s learning, the large commercial AI labs are hard at work: Google rolled out 30 new education tools and features on Gemini in June, and in late July, OpenAI introduced study mode, a student tutoring feature for ChatGPT.
Parents should be very wary of giving children unfettered access to a new digital technology. We saw social media wreak havoc on young people’s emotional states soon after those platforms debuted more than 20 years ago. With AI, it isn’t just children’s emotional wellbeing that’s at risk – it’s also their cognitive development. Parents can’t afford to wait for someone else to protect their children. They are, like it or not, the first line of defence and oversight.
Humans are pros at cognitive offloading: using tools to free up mental processing space and avoid thinking. Why use a map when GPS can navigate for you? Students behave the same way when a tool like ChatGPT or Gemini is readily available to think for them.
But when it comes to learning, the acts of thinking and struggling are what foster critical thinking skills. Brains, like bodies, develop as they are used. That’s especially important for students, whose brains are still maturing.
Some AI tools are carefully designed for education and can help children follow their curiosity, fill gaps in their learning and accommodate learning differences. Khan Academy’s AI tutor, Khanmigo, uses vetted education content to coach students towards mastering new material without immediately giving them the answer. OpenAI claims study mode in ChatGPT works similarly.
But these tools stop being helpful when they start doing the thinking for children, which is how many young people are using AI. When students put their essay prompts or problem sets into regular ChatGPT and it spits out perfect work, they are shortcutting their learning. If the training wheels on a child’s bike not only kept the rider upright but also pedalled and steered automatically, the child would be unlikely ever to learn to ride. That is what’s happening when students use Gemini or DeepSeek to do their history homework for them.
This is what researchers at MIT recently found when they tested how AI affected writing skills. They split university students aged 18 to 39 into three groups: the first wrote with ChatGPT from the start; the second wrote on their own but could use Google search; and the third was not allowed to use any tools. Later, all of the students revised their writing with ChatGPT’s help.
Those who wrote with ChatGPT from the beginning exhibited the worst writing quality and motivation, and brain activity measurements showed that the parts of their brains associated with learning were less active. They struggled to revise their writing because it was never theirs to begin with. Participants who drafted their work unaided performed best. Given that even well-educated university students are at risk, we should be even more worried about children, who have yet to fully develop their thinking skills.
OpenAI's own policies require parental consent for children 13 to 18 to use ChatGPT, which, if enforced, would increase parents' awareness of their children's AI use. Photo / Getty Images
Learning is hard. Seventh-grade English teachers don’t instruct their students in the art of essay writing in hopes that they’ll create high art. Rather, students learn to organise their thoughts, evaluate evidence, form an argument and articulate a thesis by writing the essay. When AI helps students short-circuit this process, critical thinking skills may fail to develop. In the words of one student participating in research for the Brookings Global Task Force on AI in Education: “If you are letting someone else do the work for you, you are not learning.” Other research, not yet peer reviewed, suggests that frequent cognitive offloading to digital devices may account for the recent decline in student IQ levels.
Schools are scrambling to figure out how to manage teaching and learning in the new AI landscape. Some are restricting or banning the use of AI, while others have begun to incorporate it into their lessons. Overall, however, fewer than 20% of the nation’s teachers reported that their school had a formal AI policy, according to a June survey.
But school policy won’t be able to cover everything. On Snapchat, for instance, over 150 million people have used the My AI tool, which can write an essay and do math problems. As one high school senior in Washington, DC, told Brookings: “A lot of schools, including mine, they blocked ChatGPT, but people will go on their phones and use Snapchat AI or Meta AI with like Instagram.”
As we’ve written in our book, parents are already in the dark about their children’s engagement in school, despite their best efforts to stay apprised. (Grades, for instance, give only a partial picture of a student’s learning.) AI is making this worse. Researchers found that 22% to 26% of parents of secondary school students believe their children use generative AI for education-related purposes, yet other studies suggest that actual use among secondary school students may be closer to 70%. As one education nonprofit has learned in its surveys, young people admit to using AI a lot, but they refrain from talking to adults about it because they sense adults’ fear and worry they will be judged.
Rectifying this parental awareness gap is paramount, since our research also shows that families are as influential as teachers and peers in helping young children and teens engage deeply in learning. But parents cannot foster a nation of engaged learners alone. When rolling out AI literacy programmes, which should be vetted by education experts, schools should target parents as well as students. There are materials, such as the AILit Framework, that educators can use to teach students how best to use AI for their own learning and avoid its pitfalls. Students’ families could be included in these programmes, too.
Parents should demand that commercial AI companies implement and enforce age-verification strategies, similar to what Nebraska and Britain are requiring of social media companies and some other websites. OpenAI’s own policies require parental consent for children aged 13 to 18 to use ChatGPT, which, if enforced, would increase parents’ awareness of their children’s AI use. But as of now, children easily get around this requirement by entering a false birthday or by using a different chatbot, such as My AI on Snapchat.
If we don’t help our children to use AI wisely – to elevate ideas, gain skills and build new knowledge – we risk a whole new level of learning loss: a nation of compliant and unmotivated young people who have not developed the muscles to struggle productively, think, work and contribute to our communities. This is the opposite of the human creativity and problem-solving required to navigate the opportunities and pitfalls of our new AI age.