
AI can threaten your business: here’s what to do about it

Artificial intelligence has huge potential for business, but beware the pitfalls.

Picture: iStock

Business leaders are struggling to understand how seriously they should take the latest phenomenon in the world of artificial intelligence: generative AI.

On one hand, it has already displayed a breathtaking ability to create new content such as music, speech, text, images and video, and is currently used to write software, to transcribe physicians’ interactions with their patients and to allow people to converse with a customer relationship management system.

On the other hand, it is far from perfect: It sometimes produces distorted or entirely fabricated output and can be oblivious to privacy and copyright concerns.

Given generative AI’s potential to improve productivity in the many business functions that involve cognitive tasks, calling it revolutionary is no hyperbole.

Business leaders should view it as a general purpose technology akin to electricity, the steam engine and the internet. But although the full potential of those other technologies took decades to be realised, generative AI’s impact on performance and competition throughout the economy will be clear in just a few years.

That’s because general purpose technologies of the past required a great deal of complementary physical infrastructure along with new skills and business processes.

That’s not the case with generative AI. Much of the necessary infrastructure is already in place.

Generative AI will also deploy quickly because people interact with these systems by talking to them much as they would to another person.

That lowers the barriers to entry for some kinds of work (imagine writing software by explaining to a large language model, or LLM, in everyday speech what you want to accomplish).
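To see how low that barrier is, here is a minimal sketch of the interaction, using OpenAI’s Python client as one example; the model name and prompt are placeholders, and any comparable LLM service would serve equally well.

```python
# A minimal sketch: asking an LLM for working code in everyday speech.
# Uses OpenAI's Python client as one example; the model name is a
# placeholder, and any comparable LLM provider's API would do.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; substitute whatever model you use
    messages=[{
        "role": "user",
        "content": "Write a Python function that takes a list of monthly "
                   "sales figures and returns the three best months.",
    }],
)

print(response.choices[0].message.content)  # the generated code, as text
```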

Consequently, business leaders shouldn’t sit on the sidelines and wait to see how the use of generative AI develops.

HOW WILL GENERATIVE AI AFFECT YOUR COMPANY’S JOBS?

Every company board should expect its chief executive to develop an actionable game plan. Doing so is a three-part process.

First, do a rough inventory of knowledge-work jobs: How many of your people primarily write for a living? How many data analysts, managers, programmers, customer service agents and so on do you have?

Next, ask two questions about each role. The first is, “How much would an employee in this role benefit from having a competent but naive assistant – someone who excels at programming, writing, preparing data or summarising information, but knows nothing about our company?”

The second question is, “How much would an employee in this role benefit from having an experienced assistant – someone who’s been at the company long enough to absorb its specialised knowledge?”

Finally, once your company’s knowledge-work roles have been inventoried and those two questions have been answered, prioritise the most promising generative-AI efforts.

This task is straightforward: Choose the ones with the largest benefit-to-cost ratio.

The purpose is not to identify positions for elimination; it’s to identify opportunities for big productivity improvements – where new digital assistants will be most valuable.
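To make that prioritisation concrete, here is a minimal sketch of turning the two questions into a benefit-to-cost ranking; the roles, scores and cost figures are invented examples, not part of the authors’ method.

```python
# A minimal sketch of the prioritisation step: score each knowledge-work
# role on the two assistant questions, estimate a rollout cost, and rank
# by benefit-to-cost ratio. All names and numbers are invented examples.

roles = [
    # (role, headcount, naive-assistant benefit 1-10,
    #  experienced-assistant benefit 1-10, estimated rollout cost)
    ("customer service agent", 120, 7, 9, 50_000),
    ("programmer",              40, 8, 6, 30_000),
    ("data analyst",            25, 6, 8, 20_000),
    ("marketing copywriter",    10, 9, 5, 10_000),
]

def benefit_to_cost(role):
    name, headcount, naive, experienced, cost = role
    # Benefit scales with headcount and the stronger of the two scores.
    benefit = headcount * max(naive, experienced)
    return benefit / cost

# Highest-ratio opportunities first.
for role in sorted(roles, key=benefit_to_cost, reverse=True):
    print(f"{role[0]:<25} ratio = {benefit_to_cost(role):.4f}")
```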

REMEDYING THE ‘CONFABULATION’ PROBLEM

Given the major impact that generative AI promises to have on a wide variety of businesses in the near future, the response to one of its biggest shortcomings – that it can fabricate information – shouldn’t be to avoid the technology. Rather, it should be to safeguard against that danger. Here are three ways to do so; a brief sketch combining them follows the list.

Build multi-level LLMs or combine one with another system: Companies that build LLMs are well aware that these systems confabulate and are working on ways to minimise the problem. One technique is to recognise when a user’s request is not suitable for an LLM’s standard approach, which is to formulate an answer on the basis of associations among all the words and sentences it has been trained on. For such requests, the system takes a different tack.

Supplement the LLM with a human: Users should take an LLM’s output with a grain of salt. For example, marketers using an LLM to generate copy for a website or a social media campaign can look at what the system comes up with and quickly assess whether it’s on target.

Don’t use an LLM: Some tasks are too risky for generative AI to be involved at all. For example, a system that prescribes exactly the right medications 90 per cent of the time but confabulates in one case out of 10 is unacceptably unsafe to be used on its own. It also wouldn’t save physicians any time, because they’d have to carefully check all its recommendations before passing them on to patients.
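These three safeguards can be combined into one workflow. The sketch below is illustrative only: the risk tiers and the llm_draft helper are hypothetical stand-ins, not any real system’s API.

```python
# An illustrative sketch of the three safeguards combined: route requests
# by risk, keep a human reviewer in the loop for medium-risk output, and
# refuse to involve the LLM at all for high-stakes tasks. The risk tiers
# and the llm_draft helper are hypothetical stand-ins.

HIGH_RISK = {"prescription", "legal advice"}            # never use the LLM
REVIEW_REQUIRED = {"marketing copy", "customer email"}  # human must approve

def llm_draft(prompt: str) -> str:
    """Hypothetical call to whatever LLM the company has adopted."""
    return f"[draft text for: {prompt}]"

def handle(task_type: str, prompt: str) -> str:
    if task_type in HIGH_RISK:
        # Too risky for generative AI: hand the task to a person outright.
        return "ROUTED TO HUMAN EXPERT"
    draft = llm_draft(prompt)
    if task_type in REVIEW_REQUIRED:
        # Supplement the LLM with a human: the draft is only a starting point.
        return f"QUEUED FOR HUMAN REVIEW: {draft}"
    return draft

print(handle("prescription", "dosage for patient X"))
print(handle("marketing copy", "spring sale announcement"))
```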

MITIGATING INVASION OF PRIVACY, INTELLECTUAL PROPERTY PROBLEMS AND BIAS

If you use a confidential report to help train a generative AI system, bits of the report’s contents might later show up in the response to a prompt from someone who shouldn’t have access to that information. Consequently, it’s important to be clear on the privacy policies of any generative AI you’re using.
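One practical precaution, sketched below, is to redact obviously sensitive strings before any text leaves the company for an external generative AI service; the two patterns shown are simple examples, and a real deployment would need a far more thorough scrubber.

```python
# A minimal sketch of one privacy precaution: redact obviously sensitive
# strings before text leaves the company for an external LLM service.
# The two patterns below are simple examples; real redaction needs far
# more coverage (names, account numbers, internal project codes, etc.).
import re

PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[ID-NUMBER]"),
]

def redact(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

report = "Contact jane.doe@example.com about employee 123-45-6789."
print(redact(report))  # Contact [EMAIL] about employee [ID-NUMBER].
```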

In addition to confabulations and privacy concerns, a risk with some LLMs is violation of intellectual property, or IP, rights. Companies may be exposed to legal liability if the generative-AI-produced images they use are found to be in violation of IP laws.

As a result, many organisations are waiting to see how court cases are decided before diving into generative AI. But to encourage immediate adoption, some creators of these systems are shielding customers from IP risk.

One final concern with generative AI, as with most other types of artificial intelligence, is bias. “Garbage in, garbage out” is one of the oldest sayings of the computer era, and it’s true now more than ever. If a machine-learning system is trained on biased data, the results it generates will reflect that bias. So be vigilant as you’re putting generative AI to work.
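A first step in that vigilance, sketched below with invented records, is simply to compare outcome rates across groups in the training data before using it; the single parity check shown is only a starting point for a proper bias audit.

```python
# A minimal sketch of "be vigilant": before training on historical data,
# compare outcome rates across groups to spot obvious skew. The records
# are invented, and a real audit would test many more dimensions.
from collections import defaultdict

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in records:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"group {group}: approval rate {rate:.0%}")
# A large gap between groups is a warning the trained model will inherit it.
```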

BE READY TO EXPERIMENT

Over the past few decades, leading organisations have employed the agile method for successfully developing and adopting new information systems. They manage their efforts with repeated trials rather than extensive planning. They break projects up into short cycles that can be completed in a week or two, sometimes even less. Members of the project team track progress and reflect on what they’ve learned before starting the next cycle. Often, the whole cycle is an experiment: The goal isn’t so much to build something as to test a hypothesis and gain understanding.
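As one illustration of treating a cycle as an experiment, the sketch below compares task times for a pilot group using a generative-AI assistant against a control group; the numbers are invented, and a real trial would apply proper statistical testing.

```python
# An illustrative sketch of a one-cycle experiment: did a pilot group
# using a generative-AI assistant finish routine tasks faster than a
# control group? The timings (minutes per task) are invented examples.
from statistics import mean

control_minutes = [42, 38, 45, 40, 44, 39]   # no assistant
pilot_minutes   = [31, 29, 35, 30, 33, 28]   # with assistant

improvement = 1 - mean(pilot_minutes) / mean(control_minutes)
print(f"control: {mean(control_minutes):.1f} min/task")
print(f"pilot:   {mean(pilot_minutes):.1f} min/task")
print(f"observed improvement: {improvement:.0%}")
# If the gain holds up, scale the rollout; if not, revise the hypothesis
# and run the next short cycle.
```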

Andrew McAfee is a principal research scientist at the Massachusetts Institute of Technology;
Daniel Rock is an assistant professor of operations, information and decisions at the University of Pennsylvania’s Wharton School; Erik Brynjolfsson is the director of the Stanford Digital Economy Lab.

Copyright 2023, Harvard Business Review/Distributed by NYTimes Syndicate

