
Business urged to ensure workers avoid ‘unhealthy’ AI relationships

Governments need to regulate AI, but direct users also need to actively question and engage with the issues.

Mary Mesaglio, managing vice-president at Gartner, was in Australia this week for the firm’s IT Symposium/Xpo. She leads Gartner’s executive leadership dynamics team and is focused on helping enterprises to transform, innovate and change their culture.

She spoke to The Deal about integrating AI into the workplace.

Who’s going to be in charge – the human or the AI?

Well, I think there’s a wider question of who’s in charge every time we’re interacting with technology. This isn’t new; it hasn’t just started with generative AI. It’s happened throughout our history with machines.

We used to consider machines to be our tools, mostly. We anthropomorphised them (name your car, name your boat), but they were essentially tools. Now what’s happening is they’re turning into teammates; they’re turning into lots of things.

Machines have gone from what they can do for us to what they can be for us, and what they can be for us is all sorts of things, including unhealthy relationships.

So we can have a relationship with a machine that says it’s my assistant, it’s the one that does all the drudge work, it’s my adviser, it’s a consultant, it’s a therapist.

The data also includes the machine as manipulator, the machine as liar …

We need to get better at recognising that when you’re interacting with a machine that is essentially conversational, where you talk and the machine talks back, and that machine is generating its response on the fly (which is what generative AI does), then you’re entering into a relationship with it.

And so that involves the same values-based discussions that you would have with anyone you’re entering into a relationship with: Is it healthy? Is it unhealthy? What are the dependencies, what are the rules of engagement, and so on, which then leads to the question of who’s in charge?

That is essentially another way of asking which decisions are you comfortable delegating to machines and which decisions are you not comfortable delegating to machines?

We’ve done some surveys … in general, consumers are comfortable delegating things that are going to increase their convenience. But of course, that’s also not new, that’s what we’ve always sort of traded – the surveillance price tag and the privacy price tag to get more convenience and remove drudgery.

We’re very happy to do that; we’re less happy to do things like have the machine decide about a performance evaluation, or do something that, whether it’s true or not, is perceived to have some ineffable human judgment quality that could never be replaced entirely by a machine.

Do we assume there are some ineffable human qualities? Are there human qualities that can never be replaced?

Humans get this stuff wrong all the time; forecasting is really hard, predicting is really hard. That being said, I think the unexamined assumption that there’s a bunch of ineffable human qualities that a machine could never replicate is a really dangerous assumption.

Once examined, we may find that it’s true, but what is turning out to be more true is that there are situations where, as the human augments the machine, the machine indeed augments the human.

If you look at one of the areas where people tend to believe the machine could never replace a human, in terms of empathy or mental health or therapy … all the unwritten, invisible, instinctive stuff that’s going on in a therapist’s office (that) could never be replicated by a machine – that’s already proving to be untrue.

So, for one thing, there’s something called digital disinhibition, which is when certain people feel more comfortable telling their deepest darkest truths to a machine than to a human because they feel the machine won’t judge them.

There’s been some research done on trauma victims (who say) ‘well, look, the trauma came from a human, and you expect me to sit at 3pm on a Tuesday in the office of some other human whom I’ve never met, and tell them my deepest secrets?

That’s how it all went wrong the first time, so I’m not doing that’.

In other words, there’s a cohort of people who are likely most in need of help and therapy, but least willing to get it from a human therapist, and who might be more comfortable, more uninhibited and more open to help from, say, an AI therapist.

Can you see growth in that area?

Yes, especially if you consider the statistics in most countries – developed and not – about whether there’s enough human help … You look at, say, India and there are just not enough qualified professionals to go around for all of the mental health issues that need to be treated.

Then you start saying, how could we augment that, is there a humane, safe, ethical, empathetic, effective therapeutic way to augment that?

When we did some interviews with different organisations that are providing solutions, there was never a categorical claim that the human could only do this or the machine could only do that.

It was more that the machine is good at some things and the human is good at others, and together we’re better than either one separately.

So this is in the form of interactive, online programs?

Mostly what I’ve seen are chatbots, and they aren’t always generative AI. The big difference between generative AI and other AI is that generative AI generates a response on the spot, creating it just for you, and it takes a huge amount of training data to do that.

The solutions that I’ve researched were not generative, they had a closed AI system with very specific escalation procedures (if someone suggested they were suicidal, for example).
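To make that distinction concrete, here is a minimal sketch of how such a closed, non-generative chatbot with an escalation procedure might be structured. It is a hypothetical illustration, not any vendor’s actual system: every reply is drawn from a fixed, pre-approved set, and certain phrases trigger a hand-off to a human.

```python
# Minimal sketch of a closed (non-generative) support chatbot.
# Hypothetical illustration: real systems use clinically validated
# content and far more robust risk detection.

# Fixed, pre-approved responses: nothing is generated on the fly.
APPROVED_RESPONSES = {
    "greeting": "Hi, I'm here to listen. What's on your mind today?",
    "low_mood": "That sounds really hard. Can you tell me more about "
                "when you started feeling this way?",
    "fallback": "I want to make sure I understand. Could you say a "
                "bit more about that?",
}

# Phrases that trigger the escalation procedure to a human.
ESCALATION_PHRASES = ("suicide", "kill myself", "end my life", "self-harm")


def escalate_to_human(user_message: str) -> str:
    # In a real deployment this would page an on-call clinician,
    # log the conversation and surface crisis-line details.
    return ("It sounds like you might be in crisis. I'm connecting "
            "you with a human counsellor right now.")


def respond(user_message: str) -> str:
    text = user_message.lower()

    # The escalation check runs first, before any canned reply.
    if any(phrase in text for phrase in ESCALATION_PHRASES):
        return escalate_to_human(user_message)

    # Simple keyword routing into the closed response set.
    if any(word in text for word in ("sad", "down", "hopeless")):
        return APPROVED_RESPONSES["low_mood"]
    if any(word in text for word in ("hello", "hi there", "hey")):
        return APPROVED_RESPONSES["greeting"]
    return APPROVED_RESPONSES["fallback"]


if __name__ == "__main__":
    print(respond("hello"))                          # canned greeting
    print(respond("I've been feeling so hopeless"))  # canned low-mood reply
```

Because every response comes from the approved set, the system can never say something its designers did not write, which is the safety property these closed solutions rely on.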

Can you talk about your behavioural science work?

I come into this with a very specific perspective, and this is my soapbox: I work with executive teams on transformation, and they tend to be very rigorous about the non-emotional side of change.

So we have these OKRs (objectives and key results), we have these goals and we’re going to achieve them at this time, and we’re going to make this much money or save this much money … we have time frames, we have goals, we have ways of measuring them. And then they go: the emotional side of the whole thing is just kind of mushy; I’ll remember to email the stakeholders at 5pm on a Friday and hope that’s good enough, and I’ll just stand up at a town hall and line up all the rational arguments for why we need to do this change and then expect people to do it …

My message is: most corporate teams, in fact most teams in the public sector and the private sector, are categorically ignoring three domains (psychology, neuroscience and behavioural economics) in order to … get people to do things in a new way. That’s detrimental in the extreme, and it becomes an even bigger conversation when you add machines.

People are very curious (about the machine) and yet they’re much less curious about the (human). If we want a really great future, together with AI, we’d better get really good at studying the humans.

Should governments be intervening in the development of AI? Can they really intervene anyway?

Yes, governments need to create regulatory frameworks in the same way they do for the financial system, in the same way they do for nuclear arms ... So we need a lot more regulation and we have the same problems that exist in other areas.

If you graduated when I graduated, which was 1998, and you were top of your class and going into finance, and the Securities and Exchange Commission offered you a job and Goldman Sachs offered you a job: first of all, there’s an order of magnitude difference in what you’d be paid and, second, there’s an order of magnitude difference in the prestige that goes with each of those.

So the best brains would go to the top banks. What does that mean? That means the regulators are by definition going to be one step behind those developing collateralised debt obligations or whatever they are developing.

So there’s always the asymmetry of playing catch-up, like cops and robbers: I develop a better radar and you develop a better radar detector than the radar I developed. So if you look at how many AI researchers are working on what’s called capability versus what’s called safety, the number working on the first is much greater than the number working on the second. (On safety) only government has a big role to play, because there has to be someone without a profit motive who is trying to do the right thing.

There’s tons happening (from governments) in the regulatory environment. But if you ask me, should we just wait for them, my answer is absolutely not, because employees are going to be using public generative AI systems. So that’s your first source of worry. There is a big difference between using a public generative AI system like ChatGPT and using a private system where you are the one in control and can put down some rules.
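As a concrete illustration of what ‘putting down some rules’ in a private system might look like (a hypothetical sketch, not any specific product’s configuration), an enterprise could sit a small policy layer between employees and the model:

```python
# Hypothetical sketch of guardrails in front of a privately controlled
# generative AI system. Policy names and rules are illustrative only.
import re

POLICY = {
    "block_outbound_pii": True,   # stop staff pasting customer data
    "log_all_prompts": True,      # keep an audit trail
    "allowed_departments": {"legal", "marketing", "engineering"},
}

# Crude PII pattern for illustration (emails and card-like numbers).
PII_PATTERN = re.compile(r"[\w.]+@[\w.]+|\b\d{13,16}\b")


def check_prompt(prompt: str, department: str) -> bool:
    """Return True if the prompt may be sent to the model."""
    if department not in POLICY["allowed_departments"]:
        return False
    if POLICY["block_outbound_pii"] and PII_PATTERN.search(prompt):
        return False
    if POLICY["log_all_prompts"]:
        print(f"[audit] {department}: {prompt[:60]!r}")
    return True


print(check_prompt("Summarise this contract clause", "legal"))       # True
print(check_prompt("Email jo@example.com her refund", "marketing"))  # False
```

None of this control is available when staff use a public system directly, which is the gap Mesaglio is pointing to.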

So don’t wait for regulation – that’s too late. But also, from a positioning standpoint, you need to decide where you are going to play. Are you going to put generative AI in front of your customers? If you’re The New York Times, are you going to let it write an article? You have to be able to take a position rather than waiting for regulatory responses, because the technology progresses much more quickly than the regulatory environment does.

Helen Trinca
The Deal Editor and Associate Editor

Helen Trinca is a highly experienced reporter, commentator and editor with a special interest in workplace and broad cultural issues. She has held senior positions at The Australian, including deputy editor, managing editor, European correspondent and editor of The Weekend Australian Magazine. Helen has authored and co-authored three books, including Better than Sex: How a whole generation got hooked on work.


Original URL: https://www.theaustralian.com.au/business/the-deal-magazine/business-urged-to-ensure-workers-avoid-unhealthy-ai-relationships/news-story/d32bcdef5b4ebacb94f1f15237d1d119