ACCA exhibition challenges AI, ChatGPT and power of phones

This exhibition is a sobering reminder that while totalitarian nations like China steal data, the west gives it away for free.

People tend to forget that their smart phones are tracking them.

Artificial intelligence – now reduced to the acronym AI – has been in the news for some time, but it has lately moved into the headlines, particularly with the emergence of ChatGPT and its rivals. This new interest in AI, and growing awareness of its potential benefits and harms, can perhaps be explained by the rise of retail or consumer applications of the technology, which reminds us that until now we have all been the objects and targets of AI data-harvesting and manipulation, not subjects or drivers of the process.

One of the recent concerns about the new consumer AI is that it will corrupt – or further corrupt – assessment systems in schools and especially universities. Everyone has read about ChatGPT’s ability to produce an academic essay on almost any topic within seconds, and many of my colleagues have experimented with different subjects and question formats. In general, the results seem to be impressive enough to cause real concern, but on difficult or recondite subjects where online information is scarce or faulty, they can be riddled with mistakes.

There is little doubt, however, that on the kind of topics that will arise in high school and undergraduate university courses – well-worn and subject to banal institutional rubrics – AI either already can or very soon will be able to produce essays that look plausible and almost indistinguishable from legitimate ones. Ironically, the giveaway may be that the AI essay is more grammatically correct than the work of many students; to quote Zoltan Karpathy in My Fair Lady, “Her English is too good, which clearly indicates that she is foreign”.

It will certainly be a serious blow to online university courses. These have always been dubious at best, lacking human contact with lecturers and fellow-students and especially weak in their assessment procedures. The Covid lockdowns gave all of us an experience of online teaching, and the outcome is generally agreed to have been catastrophic, both for learning and for mental wellbeing. One thing that became clear is that while you can actually teach and even learn online, assuming good-will and energy on the part of both teacher and student, you cannot assess online with any kind of integrity.

Machine Listening looks at how digital assistants are taught to speak with emotion. Installation view, Australian Centre for Contemporary Art, Melbourne. Courtesy the artist. Photograph: Andrew Curtis

What the new AI applications mean, however, is that no form of assessment other than in-person examinations now has integrity. We have all seen plagiarism of various kinds – sometimes ludicrously naive, like copying texts from Wikipedia – and we know that some university students and even pupils at school have long had assignments done for them by someone else, whether paid or not. But now that essays can be produced in seconds, they can no longer be counted for assessment purposes. Perhaps the solution will be to make essay submission voluntary but non-assessed, so that they will only be written and handed in by those students who are really dedicated, and who understand that this extra effort will contribute to a better performance in the examination.

Behind these recent headlines, though, the ubiquitous use of AI by governments and corporations, while often written about, seems to escape most people’s understanding, probably because it is simply too pervasive and embedded in technologies that we all take for granted. People forget that their smart phones are tracking them, that all their messages are recorded and date-stamped, that all their social media posts and comments are harvested by algorithms that can be used for many purposes, from relatively innocent marketing to much more questionable newsfeed selection and worse: AI has been used by Russia and China to sow discord, exacerbate social polarisation and drive extremist radicalisation in the United States, because a divided nation is a weak nation.

China, of course, is in the vanguard of the new digital totalitarianism, where facial-recognition software tracks citizens in the street and monitors their behaviour, and where this and all other data collected on them from their business and social media activities is collated by algorithms that give each person a social credit score with corresponding rights or restrictions. But while a totalitarian nation like China steals data, we in the western world have been almost pathetically eager to give away what was once our private information. We use facial recognition software for the convenience of opening our computers; many people have so-called digital assistant devices in their homes that are potentially recording and profiling everything they do, even the most private and intimate activities.

This is the world that is considered from various angles in Data Relations at the ACCA – a group exhibition whose title hints at the increasing intrusion of AI data processing into our political, social and even personal relations. One of these installations reflects on, but also intervenes in, the systems that track the effectiveness of advertising on various platforms. A bank of computers has been programmed to search for articles about climate change on the web, and then to send an army of bots to click on all the advertising banners that appear with these articles. It is a simple illustration of the way that the apparent level of consumer engagement can be manipulated and statistics distorted.

Another work is in reality an illustrated lecture that has been turned into an installation, and the lecturer himself into a troll – both a figure of fairytales and an aggressive or disruptive online commentator – whose monstrous, digitally produced features appear on an oval screen in the centre. The subject of the lecture is in part a highly innovative data analytics company called Palantir, founded by Peter Thiel (also co-founder of PayPal and the first outside investor in Facebook, among other things) and named after the magical seeing-stone in Tolkien’s Lord of the Rings. Palantir has developed powerful data-analysis systems that have been used in intelligence – including countering Chinese AI attacks – in the military and in government administration, particularly in identifying patterns of fraudulent behaviour, but also in helping to manage the response to the Covid pandemic in several countries.

The troll, or his maker Zach Blas, has several criticisms of the culture inside Palantir, but a broader target is alluded to in the work’s title, Metric Mysticism. This is Blas’s name for a peculiarly Californian hybrid of AI thinking – which purports to be the most advanced frontier of rationality – with an unexpected vein of mysticism and magic. What Blas is alluding to is perhaps a more widespread phenomenon, if we consider the extraordinary popularity of magic and the supernatural in books and films of the last generation or so, coinciding with the wide availability of personal computers and the rise of the internet about 30 years ago, and then the appearance of smart phones 15 years later.

Winnie Soon tries to make graphically visible the inherently silent and elusive regime of digital censorship in China. ACCA/the artist/Andrew Curtis

In any case, as this work suggests, although the analytical power of computers has exploded so that an iPhone now has more memory and faster computing power than a supercomputer of a generation earlier, the actual workings of this technology are so opaque and inaccessible to the layman – so unlike the relatively intelligible mechanics of even complex analog devices – that the masters of the new universe themselves tend to think of their effects as akin to magic.

Two other works are more abstruse, although still intriguing, and like most of the installations in the exhibition, exist on the border between art and scientific research. The first of these, Machine Listening, invites us to sit in a small room in which chairs have been lined up in a row, rather as though we were waiting our turn to see a doctor or enter a laboratory. What we hear is fragments of speech delivered with varying tone and intention; the work draws on the processes by which machines, and especially the digital assistants mentioned earlier, are taught to speak with natural expression, but also how they are trained to detect nuances of emotion in the speech they hear or overhear.

The other is by Winnie Soon, and is an attempt to make graphically visible the inherently silent and elusive regime of digital censorship in China. There are several aspects to the installation, but the most striking is a wall on which Chinese characters momentarily appear and then suddenly disappear again. This is an attempt to give some idea of the frequency of critical posts on the social media platform Weibo, and the rapidity with which any terms – any characters – deemed offensive or even potentially dangerous, are deleted.

The most interesting works of all, though, are no doubt by Lauren McCarthy, and two in particular out of several installed in a large gallery. One of these is a video work in which the artist is ostensibly offering to act as a surrogate mother for several couples who, for whatever reason, are unable to have the child themselves. The video follows her meetings with these couples as they discuss the conditions of the surrogacy and the artist’s suggestion that her clients would have access to an app which would monitor and even control all aspects of her body during the pregnancy. The work is like a thought experiment in which we are made to consider the possibility of relinquishing all privacy and agency, for a fixed period, to two people who are essentially strangers.

In Surrogate, 2022, Lauren Lee McCarthy is ostensibly offering to act as a surrogate mother for several couples who, for whatever reason, are unable to have the child themselves. Installation view at ACCA courtesy the artist/Andrew Curtis

McCarthy had already pondered these questions in an earlier work, simply titled Lauren and produced in 2017, only three years after the launch of Amazon’s Alexa. In this project, McCarthy played the role of a human digital assistant, and duly spent some time with each of the individuals who appear in the video, which cuts from one to another, as they recount their experience of the service that the Lauren assistant offered.

The reactions are varied and sometimes surprising, like that of one young man who is grateful for Lauren’s suggestion that he get a regular haircut, and now feels more confident with women. Another couple speaks of getting used to the idea that she is always there, watching everything they do. Two gay men hint they might like more privacy for intimate moments. One young woman admits that she feels self-conscious when Lauren is about to appear and feels she should make sure her hair is presentable.

In what is possibly the most memorable moment in the video, we find a young man sitting alone in his room playing a set of bongo drums. The fish-eye lens view – presumably Lauren’s view – makes the room somehow seem more enclosed and cut off from the world outside. The young man observes that he enjoyed talking to Lauren – that it was “like having a friend, only the conversation is always about me”.

There is something quite sobering about the unselfconsciousness of this egoism, and the implication is that this young man probably doesn’t have many real friends. But more fundamentally, it suggests that it is our own moral weakness, immaturity and self-centredness that make us vulnerable to manipulation by AI algorithms.

Data Relations, Australian Centre for Contemporary Art, until March 19.


Original URL: https://www.theaustralian.com.au/arts/review/is-the-rise-of-ai-bots-good-for-art/news-story/6f57c2cd63d1cc9aafc053ec7a0b10db