It’s new and risky, but Gen Z workers are ‘all in’ on AI
By David Swan
Young Australians are ‘all in’ on generative AI and are embracing tools like ChatGPT in the workplace at speed, according to new research, amid calls for workplace leaders to introduce guardrails for the nascent technology.
The study of 1200 Gen Z Australians found that 58 per cent are already using tools like ChatGPT and Google’s Gemini in the office, and almost all (93 per cent) are not worried about the technology threatening their jobs.
Young workers are ramping up their use of the chatbots despite the highly publicised risks associated with the technology, including ‘hallucinations’ – faulty or misleading responses – and security risks associated with feeding sensitive workplace data into the large language models.
The research was conducted by Hatch, an online jobs marketplace described as ‘Seek for Gen Z’. Hatch CEO Adam Jacobs, a co-founder of The Iconic, said Gen Z’s keenness to embrace AI technologies should be viewed as an asset to businesses.
“It’s very natural for young people, who are digital natives, to adopt new technologies,” he said.
“All employers are sitting at the start of a major wave of transformation brought on by AI and the next generation can help position them to ride that wave, rather than being threatened by it.
“While there’s a lot that we still need to learn about the way AI is going to change our world, it’s comforting to hear that the next generation is optimistic about it.”
Hatch head of AI Dr Arwen Griffioen said workplaces needed to establish guidelines and ensure employees were trained to detect and mitigate hallucination, bias and error.
“Managers should be enthusiastic about their teams using the technology where it is helpful and safe,” Griffioen said. “Creating a culture of healthy scepticism and accountability ensures that employees feel empowered to use the tools that elevate them while building critical thinking and analysis skills.
“Legal and security teams should be auditing the provider agreements to ensure that company data and IP are not retained for model improvements or tuning. Establish employee accounts on the tools of choice for use in workplace contexts, and emphasise data security in onboarding and recurrent training.”
Law student Monique Buksh, a paralegal at Zed Law, said she used ChatGPT regularly at work, as well as a tool called Dashworks, which is integrated with her firm’s legal systems.
“I use it every day. It’s instrumental in helping me draft things more efficiently and identify oversights that I might not have been able to identify myself, given my lack of experience working as a lawyer,” she said.
“AI is really helpful for me in being that ‘eagle eye’, and it’s a great tool to use internally. But inadequate safeguards could result in potential breaches of confidentiality, which would jeopardise client integrity and expose the firm to legal liabilities. So it’s a great tool but it also doesn’t replace the training we receive.”
Earlier this year, a US lawyer was suspended from the bar and fired from his law firm after confessing to a judge that he had used ChatGPT to prepare a motion submitted in civil court.
Buksh said generative AI was adept at collating information and making summaries, but that its limitations should be recognised.
A Senate committee in November recommended that artificial intelligence chatbots ChatGPT, Google’s Gemini and Meta’s Llama should be deemed “high risk” and subjected to mandatory transparency, testing and accountability requirements.
After nine months of hearings, the committee issued 13 recommendations including sweeping EU-style legislation that would introduce guardrails against high-risk AI use cases across the economy.
Varad Chaudhari, a Gen Z investment intern with venture capital firm Rampersand, said he used ChatGPT and other chatbots like Claude and Gemini just as much as he used Google. Like Buksh, Chaudhari said the AI chatbots were particularly helpful given he was still early in his career.
“Every day I use it to quickly learn something,” he said. “I’ll be on a call with a founder who will tell me something I haven’t heard before, and it’s awesome to learn 60 or 70 per cent of something really quickly.
“There are also so many ways to make what you do faster. ChatGPT really speeds up my job, which is massive. As a junior person, you’re trying to make sure you’re working hard and not missing something important. I find it incredibly useful.”
Chaudhari said he thought company executives should try the tools themselves so they could then have a clearer idea about how to regulate them or set up guardrails.
Amid a cost-of-living crisis, Hatch’s research also found that more than two-thirds of young workers believed they were being underpaid, while only half of Gen Z workers felt engaged at work.
The start-up earlier this year raised an oversubscribed $7 million funding round and has embarked on a poaching spree. It has landed senior talent from other start-ups, hiring Griffioen from Culture Amp as its head of AI, Tom Mansfield from Dovetail as chief revenue officer, Ashwin Ramesh from Canva as chief technology officer and Pete Binns from Dovetail as lead product designer.
Hatch is also halfway through rebuilding its platform from scratch, and once finished, it will use AI technology to match people to jobs based on their skills and values.
“Who you hire is 90 per cent of the success of a business,” Jacobs said.
“I built The Iconic for about nine years, and during that time, I probably interviewed around 2000 people and hired around 600 or 700 people. And what I learned is that what’s on a resume is often misleading. It’s actually someone’s underlying skills and traits that matter.”