NewsBite

‘We don’t say please to Excel’: Experts reveal hidden dangers of forming personal relationships with AI

Workers forming dangerous emotional bonds with AI tools face serious career consequences as experts reveal the hidden risks of treating chatbots like human colleagues.

Australians are being warned against forming relationships with the AI tools they use at work, as more employees treat the technology as a real-life colleague.

The trend to regard AI as if it were human is raising alarm bells, with experts fearing it could jeopardise workplace security, privacy and productivity.

“My rule of thumb is simple – treat AI like a power tool, not a colleague,’’ said Lucid Software chief people officer and associate general counsel Kat Judd.

Why you shouldn’t be polite to AI

Anthropomorphising AI, or giving it human-like qualities, was becoming a major workplace concern, said Gartner HR research and advisory vice president Aaron McEwan, who cited instances of employees giving the technology a nickname and chatting to it casually.

He said even saying a simple “please’’ or “thank you’’ to a chatbot had implications, with the friendliness of AI interactions creating a false sense of intimacy, leading workers to overshare and trust systems without understanding the risks.

Gartner HR research and advisory vice president Aaron McEwan. “We don’t say please and thank you to Excel, so why do we think we need to be polite to AI?’’

“We don’t say please and thank you to Excel (spreadsheet software), so why do we think we need to be polite to AI?’’ Mr McEwan said.

“We know that AI is prone to errors but it’s also a master of language so it feels very human to us and it’s so influential and compelling that you tend to believe it.

“But being too connected and too believing can be incredibly damaging to careers.’’

With chatbots available 24/7 and designed to be “agreeable and respond exactly the way we want’’, Mr McEwan feared workers were seeking advice from AI rather than their manager.

“AI has an incredible talent to make people (feel) seen and heard in a world where that’s rare,’’ he said.

Pseudo-relationships

People and behaviour expert Mark Carter agreed workers were likely going to AI for approval and recognition that may not otherwise be forthcoming from their workplace.

“Workers are seeking validation from AI because they’re not getting that validation from their peers and manager – they’re using (AI) as an emotional substitution for something that’s missing from their culture at work,’’ he said.

“But that connection (to AI) is misplaced. While AI may mimic feelings, it doesn’t really feel anything.

“(Workers) are building a relationship with something that’s not real. It’s a pseudo-relationship.’’

Mr Carter said the tendency to trust and rely on AI was causing workers to lose the ability to think for themselves.

In a recent Victorian murder trial, a senior lawyer was forced to apologise after filing AI-generated submissions that included false quotes and made-up judgments.

Mr Carter said while workers used AI in a bid to save time, having to verify its output could take even longer.

“You can’t outsource critical thinking to AI or you just become a mimic of a computer,’’ he said.

“The more you lean into AI, the less your personal competence becomes because you are not actually connecting to the information (that AI has generated).’’

‘Jobs on the line’

Lucid Software chief people officer and associate general counsel Kat Judd.

Ms Judd said while casually chatting with AI and giving it a nickname was “not inherently a problem’’, she was concerned workers were treating the technology too casually and overlooking its risks.

“The goal isn’t to fear AI but to use it professionally and within clear guardrails,’’ she said.

“The biggest risks arise when workers forget those boundaries – sharing confidential data that compromises safety or leaning too heavily on unverified outputs that undermine productivity.

“In a landscape where liability for AI-generated errors is still being defined, exposing an employer to legal, financial or reputational harm can put jobs on the line.’’

Staying at arm’s length

Communications professional Michelle Tran said she made sure she only ever used employer-approved AI tools at work and kept the technology at “arm’s length’’.

Communications professional Michelle Tran is mindful of verifying content produced by AI.

“The moment you start treating a tool like a teammate, the boundaries get blurred,’’ Ms Tran said.

“It’s easy to share more than you should or forget to consider your privacy and security.’’

Ms Tran also verified names, numbers and claims in AI-generated content before sharing it with others.

“I apply my own judgment – fact-check, refine and add the context (AI) can’t know,’’ she said.

Originally published as ‘We don’t say please to Excel’: Experts reveal hidden dangers of forming personal relationships with AI


Original URL: https://www.weeklytimesnow.com.au/technology/we-dont-say-please-to-excel-experts-reveal-hidden-dangers-of-forming-personal-relationships-with-ai/news-story/5e869a6e58ba365c5ae323abda80d08c