Why you need to tell an AI chatbot it has to do better
If you don’t, the result may be increasingly mediocre and biased content, new research suggests.
When an AI chatbot comes up with a response that is pretty good but not exactly what you wanted, do you ask it for something better?
A new academic paper says that if not enough people do, the content we produce as a society — writing, images, coding and more — may become increasingly homogenised and biased.
Working with a generative-AI tool to improve its output through extended back-and-forth takes time and effort. People who don't want to go to that trouble may find their unique writing styles starting to disappear as they lean on the tool for emails, papers and other writing they used to do on their own, the researchers say. For instance, the paper notes: “If students use ChatGPT’s help for their homework, their writing style may be influenced by ChatGPT’s.”
The authors used statistical models to show that on a large scale, content created with the help of AI will be less unique than what users would have produced without AI.
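As a rough intuition for that claim, here is a toy sketch, not the paper's actual model: treat a user's writing "style" as a single number. Users who push back on a draft recover their own voice; users who accept it produce text pulled toward one shared default. The AI_DEFAULT, PULL and iterate-rate parameters below are invented purely for illustration.

```python
# Toy illustration (not the researchers' model): content "style" as a number.
# Without AI, each user writes in their own style; with AI, users who don't
# push back accept a draft pulled toward a single shared default.
import random
import statistics

random.seed(0)

AI_DEFAULT = 0.0   # the chatbot's one-size-fits-all style (assumed)
PULL = 0.8         # how strongly an accepted AI draft dominates (assumed)
N_USERS = 10_000

own_styles = [random.gauss(0.0, 1.0) for _ in range(N_USERS)]

def produced_style(style, iterates):
    """Style of the final text: users who iterate keep their own voice."""
    if iterates:
        return style                                  # pushing back restores uniqueness
    return PULL * AI_DEFAULT + (1 - PULL) * style     # bland blend otherwise

for iterate_rate in (1.0, 0.5, 0.1):
    outputs = [produced_style(s, random.random() < iterate_rate) for s in own_styles]
    print(f"iterate rate {iterate_rate:.0%}: spread of output styles = "
          f"{statistics.stdev(outputs):.3f}")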
‘Death spiral’
This becomes an even more pressing concern when the AI-generated content already spreading across the internet is used to train the next generation of AI. The researchers point to the possibility of a “death spiral” of homogenisation, in which getting anything but a bland answer becomes more and more difficult, even for users who try to coax more out of the chatbot.
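The same toy sketch can be extended to show that feedback loop. Again, these are assumed dynamics for illustration, not the paper's model: if each new cohort's writing is blended toward the AI's default, and the next model is trained on that blended writing, the spread of styles collapses geometrically with each generation.

```python
# Toy "death spiral" sketch (assumed dynamics, not the paper's model):
# each generation's writing is blended toward the AI default, and the next
# model trains on that writing, so stylistic spread shrinks round by round.
import random
import statistics

random.seed(0)
PULL = 0.8                                 # assumed weight of the AI default
styles = [random.gauss(0.0, 1.0) for _ in range(10_000)]
ai_default = 0.0                           # the model's shared "house style"

for gen in range(1, 6):
    # Writing produced with AI help: mostly default, partly the writer's own.
    styles = [PULL * ai_default + (1 - PULL) * s for s in styles]
    ai_default = statistics.fmean(styles)  # next model trains on this output
    print(f"generation {gen}: spread of styles = {statistics.stdev(styles):.4f}")
```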
This dynamic isn’t limited to text-based content, according to the researchers. It could apply across many domains where generative AI is being deployed, from code generation to image and audio creation, they say.
The study also notes that any biases present in AI tools, whether political leanings or other slants, introduced intentionally or not, could be amplified across society through this same process as generative AI is adopted more widely.

To encourage users to push back against mediocre AI results, and so preserve greater diversity of expression, the researchers suggest making it easier for users to communicate their preferences to AI systems. That might mean designing the AI to ask follow-up questions to elicit a more distinct answer, or asking users to rate a response, though that wouldn't always work if a person were in a rush, concedes Sébastien Martin, an assistant professor of operations at Northwestern University’s Kellogg School of Management and a co-author of the paper.
Clear instructions
Another way to combat homogenisation is to learn from other situations where algorithms have been shown to amplify biases, such as those seen with automated hiring tools.
For example, it will be crucial to give users clear instructions and disclosures about the capabilities and limitations of the AI system they're using, so they can understand the risks, and to allow for more customisation and control, says Maria De-Arteaga, an assistant professor in the information, risk and operations management department at the McCombs School of Business at the University of Texas at Austin, who wasn't involved with the research by Martin and his co-authors.
The Wall Street Journal