Without proper regulation, AI could overtake online spaces
What is often described as a conspiracy theory could be a reality if action isn’t taken.
In the past few years, a conspiracy theory called the dead-internet theory has emerged on social media forums, eventually spreading to larger publications such as The Atlantic.
The idea is that the majority of content online is not generated by humans but by bots and artificial intelligence, commanded by algorithms to cater to you.
This would mean your interactions on the internet are often not with other humans, but with thinking, reacting machines.
Of course, this is not true. But with a recent report stating that nearly half of online traffic is automated, it could be truer than we first thought.
Cybersecurity firm Imperva stated in a 2023 report that 47.4 per cent of online traffic in 2022 was generated by bots, which it divided into ‘good’ and ‘bad’.
‘Good’ bots include software that improves search functions, such as Google’s web crawlers, whereas ‘bad’ bots can emulate humans and target websites with malware (malicious software).
This was a 5.1 percentage point increase in automated traffic from 2021, when the figure sat at 42.3 per cent.
During this time, billionaire Elon Musk bought the social media platform Twitter and rebranded it as X.
Shortly afterwards, studies observed surges in bot activity on the site, such as during the first US Republican primary debate this year.
With bot activity on the rise, some suggest the far-fetched conspiracy theory could become more likely, if steps to regulate and verify AI aren’t taken.
Digital marketing strategist Prosper M. Taruvinga said he expected government regulation of AI sooner rather than later, given the large number of kids and teenagers with access to the internet.
“It’s only a matter of time until some regulatory bodies start seeing that this is not good for humans,” he said. “Especially when kids start getting involved.”
Mr Taruvinga said that in the meantime, it was up to individuals to notice the difference between technology and reality while governments caught up with the necessary policy.
“Right now, I think we’ve got discernment,” he said. “You can tell when somebody … doesn’t sound human, because you can’t tweet a hug.”
But Lambros Photios, CEO of software development company Adaca, said he was concerned that AI is advancing faster than our ability to recognise it.
As an example, he mentioned that AI software ChatGPT currently has an estimated IQ of 155.
“I think in the short term, we need to just get a bit smarter,” he said, but added that this was a short-term solution that may only last “for the next 12 to 18 months, just due to the [speed] AI is progressing.”
Mr Photios said he believed big tech, rather than federal legislators, would be the sector pushing for protection from bots.
“Those larger groups like Apple and Google have historically created technologies to prevent people like you and I from being affected by scammers – and it hasn’t been at the instruction of government,” he said.
“Cybersecurity has been something historically, I’d probably say four or five plus years ago, that was left up to the consumer. You know, ‘go get your Norton Antivirus’, ‘go get your McAfee’, ‘get these antivirus technologies for your computer, and you’ll be protected’.”
Mr Photios said that larger tech companies such as Google, Microsoft and Apple now consider cybersecurity a requirement for their products.
“When it comes to big tech, they’re all needing to continue to innovate in this space, simply to keep up with each other,” he said. “So the great thing about the competitiveness of these groups is they’re going to continue to advance the feature sets around cybersecurity, to protect us as consumers.”
But as these protections are still developing, Mr Photios said it was important to have a plan to protect yourself from scams and AI impersonating friends and family.
“A larger scale cybersecurity strategy hasn’t kicked in yet to actually prevent these bots from making their way through,” he said.
Cyberpsychologist Jocelyn Brewer, a psychologist trained in understanding our relationship with technology, said that teaching people to be aware of their safety online should be about education rather than restricting access.
“The focus with regard to technologies has traditionally been a protectionism approach,” she said. “Do less, avoid, or ‘stop’ mentality, rather than to deeply understand and participate intentionally in digital spaces.”
Ms Brewer said that because of this, there is no clear framework for teaching people how to operate responsibly online.
“Rather than teach this (too expensive!) and develop responsible technology use policies, governments create phone ‘bans’ at schools (cheap option!), where the opportunity to drive safely is removed and pushed into home contexts,” she said. “Parents also, often, have no strong skills in this either.”
“We have lots of media attention on teens and screens and impacts of social media on mental health, but we have not seriously paused to consider how to support young people (or in fact people of any age) to participate well in these spaces.”