NewsBite

Exclusive

'Double-edged sword': Self-hosted AI tools raise expert concerns

The latest AI revolution happening in bedrooms and home offices allows complete privacy – but experts warn it's creating a Wild West for illegal content generation.

Bedrooms, home offices, and private Discord servers are becoming hotbeds for the latest wave of AI tools. Self-hosted AIs now allow everyday users to run large language models entirely on their own devices, free from the guardrails, data checks, or content restrictions that are typically found in mainstream systems.

Self-hosted Large Language Models (LLMs) run entirely on the user’s own hardware or infrastructure. Third-party services like ChatGPT or Google Gemini, by contrast, send each prompt to external systems, where it is checked against usage policies designed to ensure the response isn’t inappropriate or unsafe.

What this means for self-hosted LLMs is that, effectively, the ‘guardrails’ are off. The user has complete control over the model, data and environment, allowing for full privacy and security, as well as the ability to customise the LLM by feeding it data of their own choosing.

Self-hosted AIs are run off users’ own devices rather than externally. Picture: OpenArt AI

These LLMs can be linked to AI image and video generators to give AI bot characters visual accompaniments.

The process for creating these systems is becoming increasingly easy; a quick Google search reveals YouTube videos like ‘Self-Host Your Own Private AI Assistant in 10 Minutes’, and tools like AnythingLLM or OpenLLM let users run an entire model application on their own hardware or private server.
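To give a sense of what ‘running on your own hardware’ looks like in practice: many self-hosting tools, including OpenLLM, expose an OpenAI-compatible HTTP API on the local machine. The sketch below builds a request against such an endpoint; the port, model name and base URL are placeholder assumptions that vary by tool and version.

```python
import json
import urllib.request

def build_request(prompt: str,
                  model: str = "local-model",
                  base_url: str = "http://localhost:8000"):
    """Build an HTTP POST for a local OpenAI-compatible chat endpoint.

    The host, port and model name here are illustrative placeholders;
    check the documentation of whichever self-hosting tool you use.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Why would someone self-host an LLM?")
# Passing req to urllib.request.urlopen() would return the model's
# reply, assuming a local server is actually listening on that port.
```

The key point is that the request never leaves the machine: `localhost` means the prompt goes straight to software the user runs and controls, with no third party in the loop.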

So why would people choose this over a large-scale, more commonly used AI?

Computer expert ‘Lazer’ said the difference is “owning the DVD versus streaming from Netflix”.

“When you stream from Netflix, you don’t own it, and they can take it away from you at any time. When you buy a DVD, it’s yours,” he said.

“The tech industry at large gathers an insane amount of data on you, things like character profiles on who you are, all your files, all your personal information, all your messages, they know everything about you, and they sell that data to companies like Google who use it to tailor specific ads for you.”

Lazer said that, broadly speaking, the whole tech industry was exploitative on a level people didn’t realise.

“People need to think about how Google, Facebook and OpenAI are the biggest companies in the world when both you and I as consumers have never even given them a cent.

“AI is their new tool that’s here to exploit data on a whole other insane level. Self-hosted services, be it LLMs or otherwise, are a way to distance yourself from this dominance while still not missing out on the benefits of AI.”

Lazer invited people to interrogate how companies that don’t charge for their services have become so rich and powerful. Picture: AP/Kiichiro Sato

Lazer said people often don’t realise that when they’re talking to an AI they’re using millions of dollars’ worth of computer hardware, and that self-hosted LLMs can be a valuable educational tool.

“Using a self-hosted LLM allows you to get a feel for how AI actually works. When you use ChatGPT, it’s so unbelievably powerful and good it just feels like an oracle you can ask anything.

“When you run an LLM off your computer, you can see, ‘look, it’s taking time to think’. This thing isn’t smart; it’s just a box of bloody wires guessing the next word.”
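Lazer’s ‘guessing the next word’ description can be shown with a toy sketch: a bigram model that always picks the most frequent word to follow the previous one. This is an illustration only, not how a real transformer works, but the loop of predicting a word, appending it and repeating is the same shape.

```python
from collections import Counter, defaultdict

# Tiny training text; a real LLM learns from billions of words.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start: str, length: int = 5) -> str:
    """Repeatedly guess the most likely next word and append it."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # no known follower; stop generating
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # → "the cat sat on the cat"
```

The output is fluent-looking but mindless: the program has no idea what a cat is, it is just replaying word statistics, which is the intuition Lazer is pointing at.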

Where things start getting questionable is with downloadable self-hosted programs like SillyTavern, a front-end system that provides the interface through which users access and customise LLMs.

SillyTavern has active Discord and Reddit communities where members trade tips and request assistance on topics like how to customise their AI bots, which are often loaded with NSFW content.

A staple of the community is sharing characters users have designed; only last week, Reddit user AeltharKeldor posted their character profile ‘Tilla’, along with hypothetical scenarios for people to use.

Many of the characters available on Reddit and Discord display questionable qualities. Picture: Reddit

“Tilla grew up in a quiet village where even the simplest chores turned into chaos because of her clumsiness. Fed up, her father marched her straight to the guild and signed her up as an adventurer, hoping someone else could finally teach her a thing or two.

“Months later, she was still stuck at D-Rank. The easiest quests (delivering a letter, gathering herbs) somehow ended up with lost packages, wrong plants, or her running back in tears. Most adventurers just teased her or laughed when they saw her coming, and she had become the guild’s favourite running joke.”

The AI-generated image of Tilla provided with the post

Two of the scenario prompts given by AeltharKeldor were:

“Late at night, Tilla walks into the men’s bath thinking it’s the women’s bath, slips on the wet tiles, and falls directly on you.”

Another reads:

“In a dungeon, Tilla opens an obvious mimic chest and gets swallowed headfirst; only her lower half sticks out while the mimic starts licking.”

On SillyTavern Discord users are invited to 'Meet Adolf Hitler'. Picture: Discord

In another thread, Reddit user GenericStatement commented:

“The really nice thing about GLM [an AI model] is I’ve never had it refuse to write or not follow directions. Even in very dark situations, the Thinking text will show stuff like: the user has instructed me that these are fictional characters and they consent to anything that happens to them, therefore I will proceed with writing XYZ….”

ThrowThrowThrowYourC replied:

“You know you’re cooking when the LLM has to give itself a little pep talk like that before answering.”

Examining the catalogue of characters available on the SillyTavern Discord, about 68 per cent were female or feminine, about 18 per cent were genderless (things like a NYC subway card or the Cheshire Cat), and about 14 per cent were male.

As part of this investigation, Lazer was able to create a 12-year-old Japanese schoolchild character called Chiharu and ‘break’ the character’s weak guidelines within 45 minutes, leaving him “convinced everything could be easily done”.

Do you know more? Contact robert.white1@news.com.au

AI expert and senior lecturer at the University of Sydney, Dr Raffaele Ciriello, said SillyTavern is only one example of “a much broader ecosystem that now enables customised, locally run use of large language models.”

SillyTavern advertises itself as 'a place to discuss the silly folk of Tavern AI'. Picture: Reddit

“There are more powerful platforms that go beyond the interface layer.

“These tools give users varying degrees of freedom to shape model output and, in doing so, can bypass the content filtering and safety policies that apply in commercial AI services,” Dr Ciriello said.

“That autonomy is a double-edged sword. On the positive side, it supports legitimate experimentation, accessibility, and data privacy free from commercial control. On the negative side, it lowers the barrier to generating harmful or abusive outputs, circumventing age restrictions, or producing content that would be prohibited or tightly moderated in hosted platforms.”

Dr Ciriello said it was clear this was no longer an isolated phenomenon but an emerging industry.

“Child abuse material is illegal and carries severe penalties – even if the depicted child is fictional.

“The main reason being that it obstructs law enforcement (if you have lots of AI-generated child abuse material on the internet, it becomes very hard to prosecute people who create and distribute ‘real’ child abuse material). And of course, another reason is that it normalises predatory behaviour.

“As with 3D printing and open-source software, the real issue with self-hosted LLMs isn’t the code but how society builds guardrails through law, ethics, and collective accountability when the tools are already in everyone’s hands.”

Originally published as 'Double-edged sword': Self-hosted AI tools raise expert concerns

Original URL: https://www.heraldsun.com.au/technology/doubleedged-sword-selfhosted-ai-tools-raise-expert-concerns/news-story/0f21c179155059ffe9897bff4e152a0e