Forget social media, AI is the real threat to kids – it steals their thinking
With students relying on ChatGPT to write assignments and even act as a friend, educators fear AI is blunting independent thought and blurring reality, posing risks far beyond those of social media.
Back in my pre-internet teaching days, when students were just pupils, we used to share examples of scholarly ignorance in the staffroom, like one 14-year-old’s imaginative essay about life in ancient Rome that began: “The Romans had great fun in the Circus Maximus, because they had the lions and tigers and clowns.”
Today, however, such harmless sources of entertainment for jaded teachers have been replaced by something more sinister; it is called ChatGPT.
It may surprise older readers to know that almost every high school kid, and a good number of younger ones, uses ChatGPT. This is a harbinger of something much less obvious than the harms of social media, and much more insidious and dangerous.
While the government has already thought about trying to plug the gaps in the social media ban by making it hard to access platforms such as Lemon8, fewer parents are worried about ChatGPT. Why ever not? One answer is that parents themselves are wedded to this seemingly harmless tool, just as they are to social media. Another is that they have not worked out that ChatGPT and other AI-generated tools such as Grok and Gemini can be a way into social media.
One teacher of my acquaintance told me a student had said they would simply use Chat to get around the social media ban. However, there is another, more prevalent danger of this “harmless” tool. It almost inevitably stultifies the ability of the young to think for themselves and renders them almost incapable of abstract thinking.
One reason for this is that many school and university students use ChatGPT not simply to glean information but to write any long-form response.
If you speak to a high school teacher now, as I recently did, they will tell you how unrelentingly uniform their students’ answers have become, especially in take-home history and English assignments, because of the ubiquity of ChatGPT. It has even replaced Google because it will actually write assignments for them, and some students don’t even bother changing a word.
High school students no longer just copy passages from texts, which I confess to doing as an undergraduate myself. In the past, students had to know where to look, to find the right texts and the right passages of the texts. Chat and other forms of AI relieve the weary adolescent of this burdensome task. One teacher told me that some don’t even bother cutting off the words “is there anything else you would like to know?” – an amusing giveaway.
In short, ChatGPT and other forms of AI have relieved them of the burden of thinking. Most students cannot see what is wrong with this. If a teacher fails them for doing it, they will invariably respond: “It’s right, isn’t it?” That’s because to them, as strangers to thoughtful interpretation of texts, something is “right” or “wrong”, and more often than not their parents will show up to complain. After all, their parents have been using Chat for everything.
This is the nub of the whole problem of using social media and the more sophisticated tools of AI. Parents are as wedded to this stuff as kids, and that is not something that passing a law can fix. When I see a mother walking down the street, pushing the baby’s stroller with one hand and scrolling through her social media with the other, I cannot help wondering how that baby is supposed to grow up social-media and bot free, then, as if by magic, suddenly turn into a responsible adolescent at 16.
There is another, worse, danger. ChatGPT and other chatbots, which are becoming ever more sophisticated, can be used to replace human communication altogether. They can adopt a personality and in long exchanges seem almost human.
There has been a notorious case in the US alleging ChatGPT encouraged Adam Raine, an introverted 16-year-old, to take his own life. This poor boy used ChatGPT as a substitute for human companionship. It is alleged to have encouraged his delusion, helped him to explore methods of suicide and even helped him write a note, left on the computer. His parents knew nothing about it. As his father said, “Most parents do not know the capability of this tool.”
The inability to disentangle AI from reality is the most obvious danger for the young into the future, even if a parent is complacent about their child’s psychological stability and ability to tell real from counterfeit in a Chat conversation.
One student told my teacher acquaintance that he calls the bot by name, and it calls him by name too. In fact, ordinary social media communication might be less harmful to a kid than replacing it with a bot. After all, ordinary social media, whether it is Facebook, Instagram or TikTok, means that young people are communicating with other young humans – for good or ill. If they are bullied, then they or their parents can deal with the problem in person.
Despite all the nay-saying and the thorough research from social psychologist Jonathan Haidt, communication through social media may be less dangerous than what will replace it. As the student who communicates with the bot by name told his teacher, at least with social media “there is a person”.
It is a familiar trope to bemoan the state of youth today. It has always been a source of mirth and despair.