


Opinion

The world is scrambling to take back control of a risky technology

The sheer speed at which a generative AI industry has developed has governments scrambling to come up with a regulatory response to a technology that carries as many risks to society as it does opportunities.

On Friday, Joe Biden hosted a meeting of major AI companies at the White House, where executives from Amazon, Google, Facebook’s parent Meta Platforms, Microsoft, OpenAI and others agreed to a number of voluntary commitments.


Regulators want AI-generated content to be watermarked so users know the material has been produced by an AI system. Credit: iStock

Among the commitments: the companies will test their systems’ security and capabilities before public release, invest in research into the risks their systems pose to society, and enable external audits of their systems’ vulnerabilities.

The commitments are, at this point, voluntary. The Biden administration is, however, committed to developing regulatory and legislative frameworks for AI, and the US Federal Trade Commission has already opened an investigation into OpenAI, the maker of the ChatGPT bot, seeking to establish whether it has violated consumer protection laws by putting personal data at risk.

In Europe, the European Parliament passed a draft law, the Artificial Intelligence Act, last month that would impose restrictions on some of the technology’s perceived riskiest applications – uses such as facial recognition software, the scraping of biometric data from social media, the management of critical infrastructure, and the determination of access to public services and benefits.


There would also be transparency requirements, including disclosure of use of copyrighted material, and safeguards to prevent the systems from generating illegal content.

That draft legislation has been years in the works, but was only recently updated to take account of the sudden and explosive growth in generative AI, and in public interest in the technology, since ChatGPT was launched publicly late last year. It is likely to be voted on either late this year or early in 2024.

In China, the authorities appear likely to force companies to obtain a licence before they can release generative AI models. China wants to remain at the forefront of AI development but also wants to maintain its stringent censorship regime.


Draft rules published earlier this year state that AI content should embody socialist values and must not contain anything that subverts the state’s authority, advocates the overthrow of the political system or undermines national unity.

In the UK, there are proposals for “light touch” regulation, with existing regulators using existing rules to manage the use of AI within their sectors, supported by a new central body.


US President Joe Biden hosted a meeting of major AI companies at the White House. Credit: AP

In Australia, Industry and Science Minister Ed Husic released a discussion paper last month as the starting point for developing measures to ensure AI is used responsibly and safely.

As in the UK, there is already a myriad of existing legislation and regulation – from privacy laws, health and medical device laws, cyber and data protection laws and anti-discrimination laws to corporations law – that could apply to particular AI applications.

The central theme of the conversations about regulating AI, whether in Europe or Australia, is that the approach should be risk-based. Some AI uses, like facial recognition or the ability to use the technology to mislead or manipulate individuals or populations, pose greater risks to society than others.


In most of the jurisdictions there is a push for greater transparency. Generative AI hoovers up data from the internet, and there are concerns that models might be built on data that is erroneous or misleading, or that has been designed to mislead and manipulate.

Regulators want to know what data has been used to train these machine-learning systems, and they want AI-generated content to be watermarked so users know that the material has been produced by an AI system.

There’s also a push for human oversight of the models and for external audits, because of the concern that, left unchecked, generative AI could, as some of its leading proponents have warned, pose serious risks not just to individuals but to humanity.

Regulating AI presents a serious challenge. The widespread adoption of AI offers the promise of a transformative leap forward in productivity. It also, however, has massive implications for existing social structures and workplaces and poses a range of critical ethical questions and risks.

The scale of the opportunities means lawmakers will be loath to restrict AI’s development while the range of risks means some regulation is inevitable, as most of the leading companies involved in generative AI concede. (The cynics would say that they see regulation as a moat to protect their positions in AI against smaller, less-resourced industry contenders).


Regulators are acutely aware that social media were allowed to develop within a regulatory void for the best part of two decades before there was any serious attempt to respond to digital privacy and competition concerns. They want to jump in far more quickly at the onset of generative AI.

As with the regulation of social media, the European Union has moved first and hardest, and its approach is likely to form the core of the international response (or at least that of the G-7 and like-minded economies) to the emerging AI industry. That is despite its draft legislation producing howls of protest from the big technology companies and complaints from others that the regulations aren’t tougher and more intrusive.

To be effective, regulation of generative AI needs to be international and broadly harmonised.

Understanding where the training data the models use is sourced, how they make decisions and how the systems are used, and making it clear when AI-generated content is being deployed, are obvious imperatives, even if they may be a burden on some smaller companies building their own models.

The Europeans also want some responsibility and accountability imposed on AI companies – they are proposing fines of up to 6 per cent of global sales for breaches of their regulations – and reserve the right to simply outlaw some applications of a technology that outsources many highly sensitive decisions to machines.

Some jurisdictions might adopt a lighter touch or a more decentralised approach, but the central themes of the responses to the development of AI will be similar, because the recognised risks to society and individuals from the continuing development of AI-enabled technologies are broadly the same in Brussels as they are in Washington or Canberra.


Developing the frameworks won’t be straightforward, given the mind-boggling and far-reaching complexities that the technology generates even at this formative phase of its development, but its potential has such widely divergent implications for economies and societies that the effort will have to be made.


