‘Embarrassing’: The huge downside to artificial intelligence
AI “assistants” are taking the world by storm, with employers and employees reaping the benefits. However, the pitfalls can be unexpected and damaging.
ANALYSIS
Do you even know what your last email to the boss said?
AI isn’t the Terminator of Arnold Schwarzenegger fame. But it can get you terminated from your job.
Generative Artificial Intelligence “assistants” are taking the world by storm.
And the workplace isn’t ready.
It’s the ultimate in outsourcing. But who’s responsible for the outcomes?
Employers and employees are already reaping the benefits, usually without either knowing the other is doing it. But the pitfalls can be unexpected, damaging and embarrassing.
Too hungover to submit that client report to management? There’s an AI for that.
Too bored to respond to email #101? There’s an AI for that.
“Fake it until you make it” not working out yet? There’s an AI for that.
AI is already everywhere; it’s listening, transcribing, annotating – and assessing – your meetings.
It’s tracking, comparing – and assessing – your activities.
It’s monitoring, collating – and assessing – your reports.
And, often, it offers advice as a result.
On the whole, research suggests the growing presence of such silicon supervisors is bothering Australian employees.
A survey from the University of Queensland’s Business School has found Australians are among the least comfortable with workplace AI in the world. Especially when it comes to performance reviews, hiring, and firing.
“Only 44 per cent believe the benefits outweigh the risks, and only a quarter believe AI will create more jobs than it will eliminate,” Professor Nicole Gillespie said.
Many users report concerns about privacy and cybersecurity. And the potential for malicious manipulation, or just plain stupid decisions, is genuine.
Especially because everything about the machine-thinking process is unexplained, inexplicable, or just plain secret.
Including whether or not your job is safe.
AI: Artful Idiots
AI assistants are useful; they’re just overestimated.
Workplace AI can improve efficiency in standardised, everyday operations. It facilitates faster, better-informed decisions. It identifies innovations and assesses products and services.
But at the fundamental level, they’re just another tool.
And a tool is only as effective as the hand that wields it.
AI does not understand; instead, it “averages” an internet’s worth of words on a subject.
AI does not think; it applies “if this, then that” algorithms generated from preselected data.
AI does not decide; it establishes and applies scores and probabilities based on supplied guidelines.
Modern Generative AI systems combine all three of the above into new algorithms they build for themselves, and as a result, not even their creators know precisely how they work.
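To make that concrete, here is a deliberately crude, hypothetical sketch of the “scores and probabilities” approach described above. The phrases and weights are invented for illustration; no real product is this simple, and none publishes its rules.

```python
# A deliberately crude, hypothetical sketch of "if this, then that" scoring.
# The phrases and weights below are invented for illustration only.

def score_applicant(resume_text: str) -> float:
    """Apply hand-written rules to produce a one-to-five 'star' score."""
    score = 3.0  # every applicant starts on a neutral three stars
    rules = {
        "python": +0.5,          # if this phrase appears, then add half a star
        "leadership": +0.5,
        "employment gap": -1.0,  # if this phrase appears, then subtract a star
    }
    text = resume_text.lower()
    for phrase, weight in rules.items():
        if phrase in text:
            score += weight
    return max(1.0, min(5.0, score))  # clamp to the 1-5 star range

print(score_applicant("Python developer with leadership experience"))  # 4.0
```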
And what they do know, they jealously guard behind commercial-in-confidence agreements, intellectual property and patent laws – and the silence of nervous users.
AI exists to serve. But is it you?
A harried New York lawyer asked a Generative AI to collate precedents for a court filing. When it couldn’t find actual examples, it made them up. The lawyer didn’t read the results. The judge, however, did.
Amazon trained an AI to apply a five-star approval category to new job applicants.
Soon, Amazon had stopped hiring female tech staff, without realising it.
Its AI had extrapolated how to identify whether an applicant was male or female. It had also noticed that previous managers hadn’t hired many women.
“If this, then that” self-taught algorithms ensued: Females = bad, therefore minus one star.
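How does that happen? A toy, entirely hypothetical illustration: fit even the simplest “model” to skewed historical decisions and it will reproduce the skew against whatever feature correlates with it. The data below is fabricated to make the point.

```python
# Hypothetical toy of proxy bias: if past hiring decisions were skewed,
# a model fitted to them "learns" to penalise whatever feature tracks
# the skew, say a flag such as "captained a women's chess club".

# Fabricated history of (feature present?, hired?) pairs, biased by design.
history = [(1, 0), (1, 0), (1, 1), (0, 1), (0, 1), (0, 1), (0, 0)]

def hire_rate(feature_value: int) -> float:
    """Historical hire rate for applicants with/without the feature."""
    outcomes = [hired for flag, hired in history if flag == feature_value]
    return sum(outcomes) / len(outcomes)

# The "learned" rule simply replays the historical skew as a prediction.
print(f"hire rate with flag:    {hire_rate(1):.2f}")  # 0.33
print(f"hire rate without flag: {hire_rate(0):.2f}")  # 0.75
```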
Other studies have discovered AI “teaching themselves” racial and class discrimination by judging applicants by their postcodes, or the churches, schools and universities they attended. Some judged parents to be less conscientious employees.
“Most cases of AI-based discrimination won’t be reported. Or maybe even noticed. And that is a big problem,” says Associate Professor Alysia Blackham of the University of Melbourne.
AI: Avoidable Insanity
Last year, the Melbourne University Law Review published an analysis of Australian workplace laws. It revealed that nobody really knows how AI is being integrated into Australian workplaces – only that it is being rapidly adopted.
“There are many software tools that use AI to streamline human resource functions – from recruitment to performance management and even to dismissal,” says Associate Professor Blackham.
“But how these are being used is often only revealed when things go really wrong.”
Even then, Australian law, policy and corporate ethics are far behind.
Who is responsible for what AI does?
The owner? After all, it’s a purchased tool, just like any other.
The producer? After all, a tool must do what it claims to do.
The user? After all, it’s not the car’s fault the driver was inattentive.
It’s just that we think AI knows how to think.
And we assume AI will be working in our best interests. Not its creators’.
• Security: How confidential is that client report you ran through an AI to make it easier to read? How secret are the details of that corporate deal minuted and summarised by AI? And how much of this material – almost inevitably reused to further train the AI – can other users recover? We don’t really know how AI companies handle our data.
• Accountability: It looks good. It sounds good. But is it good? It’s the difference between pseudoscience and science. Wellness and medicine. Marketing and experience. Theory and practice. But AI algorithms are hard to unpack – if their owners even let you see them. So, if an AI makes a questionable call, can you justify its conclusion? Can you identify what it considered, or the information it used to reach an outcome? Who is culpable?
• Hallucinations: AI doesn’t “know” anything. What it has is a homogenisation of everything it has been shown. In the case of Generative AI, that’s years’ worth of content scraped from social media and the internet. Writing assistants have thousands, if not millions, of examples of the kind of thing you are writing on call. The AI doesn’t understand the subject. It just mixes and matches patterns of words to fit your request – tweaked to be a little different from the last time it did the same thing (a toy sketch of this mix-and-match process follows this list). And, sometimes, that source material – or algorithmic mix – can go awry. Especially for unusual questions or subjects.
• Privacy: Where did the data come from to train the AI? Whose experiences – and expertise – have been added to the algorithmic soup? Who owns this? And who is responsible for privacy, confidentiality or even copyright breaches generated by AI?
• Overreliance: It used to work fine. Now it doesn’t. The latest upgrade broke the system! The companies behind commercial Generative AI hold all the cards. It’s their baby. They can teach, refine, adapt, and monetise it however they choose, as often as they choose. Businesses reliant on it for everything from hiring practices to report writing can suddenly discover their processes brought to a halt. Or tweaked in an unwanted way.
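As for the mix-and-match process flagged in the hallucinations item above, here is a minimal, hypothetical sketch. It strings words together purely according to which words followed which in its tiny “training” text, with no model of meaning at all; real Generative AI is vastly more sophisticated, but the principle of pattern continuation is the same.

```python
import random

# Hypothetical toy of pattern continuation: generate text by replaying
# which words followed which in the "training" corpus, nothing more.
corpus = "the report is good the report is late the client is happy".split()

# Bigram table: for each word, every word that followed it in training.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

word = "the"
output = [word]
for _ in range(6):
    # Pick any observed continuation; fall back to a random word at dead ends.
    word = random.choice(follows.get(word, corpus))
    output.append(word)

print(" ".join(output))  # plausible-sounding, yet nothing is "known"
```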
AI: Algorithmic Interlopers
The lack of transparency and accountability surrounding artificial intelligence and algorithmic management (AM) has caused the European Parliament to ask: “Should the use of AM be permitted at all?”
Modern AI can certainly generate interesting questions.
“Can decisions and actions performed by AI be enforced? Who is liable? With a ‘human in control’, whose interests does that human represent? How can a line be drawn between ‘worker monitoring’ and worker surveillance?
“The answers to such questions could pave the way to AI deployment that can benefit everyone,” the EU Parliament Think Tank assessment reads.
Algorithmic management, it warns, can demoralise – and dehumanise – the workplace.
It can “unduly limit workers’ autonomy, reduce human contact and the ability of workers to discuss their work with managers, or contest decisions that seem unsafe, unfair, or discriminatory,” the report warns.
It adds that the effects on staff include the loss of a sense of professionalism, identity and meaningfulness – and that workers who feel subject to automated decision-making without transparent processes become suspicious and cynical.
A host of global studies are finding AI is the new office outcast.
Employees fear it will “steal” their jobs and skills; they fear it will “snitch” on their behaviour; and they fear it will unfairly judge their abilities.
But AI, no matter how cold or dispassionate, still needs them.
And it needs them to lift their game.
“AI and worker coexistence require workers’ technical, human, and conceptual skills,” a recent study led by human resources researcher Dr Araz Zirar notes.
“Workers need ongoing reskilling and upskilling to contribute to a symbiotic relationship with workplace AI.”
But in a world where the boss is probably already taking advice from an AI, do they realise they’re really just hiring a mimic machine?
“Existing labour laws, put in place before AI systems came on the scene, do not appear fit to provide meaningful guide rails,” the European Parliament report reads.
“As with any new technologies, tensions arise between two opposing regulatory approaches: strict regulation to safeguard society from potential hazards and minimum regulation to promote the technology’s deployment and innovation.”
Jamie Seidel is a freelance writer | @JamieSeidel