
Google, Cochlear spearhead partnership to harness AI to help people hear again

Peter Jackson used AI to reveal ‘secret’ conversations between The Beatles in the Get Back documentary. Cochlear and Google plan to bring that technology to those hard of hearing.

Film director Peter Jackson developed a machine learning system to complete the sound restoration of The Beatles’ ‘Get Back’ documentary.

Film director Peter Jackson quipped that the Beatles’ “secret” conversations were “not drowned out anymore” after he developed a machine learning system to complete the sound restoration of the band’s ‘Get Back’ documentary.

Now a partnership between Cochlear, Google and Australian audiology researchers is looking to put similar technology in every hearing aid and implant.

The collaboration is part of Google’s Digital Future Initiative, a $1bn program to make technology more accessible – and, in this case, to help people hear.

For the Get Back documentary, Jackson and his team of filmmakers used artificial intelligence to teach a computer to recognise the sounds of various instruments and the band members’ voices, enabling a better sonic mix.

Crucially, it allowed Jackson’s team to unearth previously hidden conversations between the Fab Four, who would turn up their amplifiers for a discreet chat away from the prying ears of the original 1960s filmmakers. Jackson’s system could isolate the different sounds, making some louder and others softer, and delivering greater clarity.
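How does that isolation work? Jackson’s actual system is proprietary, but a common approach is mask-based source separation: transform the mixture into a spectrogram, suppress the time-frequency bins belonging to unwanted sounds, and resynthesise what remains. The Python sketch below is a toy illustration of that idea only – the two synthetic tones and the 250 Hz split are assumptions for demonstration, not anything from the film’s pipeline, where a trained neural network predicts the mask instead.

```python
# Toy sketch of mask-based audio source separation (illustrative only;
# Jackson's real de-mixing system is proprietary and far more capable).
import numpy as np
from scipy.signal import stft, istft

fs = 16_000                                   # sample rate in Hz
t = np.arange(2 * fs) / fs                    # two seconds of audio
guitar = 0.5 * np.sin(2 * np.pi * 110 * t)    # stand-in "instrument" at 110 Hz
voice = 0.5 * np.sin(2 * np.pi * 440 * t)     # stand-in "voice" at 440 Hz
mix = guitar + voice                          # the recorded mixture

f, _, Z = stft(mix, fs=fs, nperseg=1024)      # mixture spectrogram
mask = (f > 250)[:, None]                     # keep only bins above 250 Hz
_, voice_est = istft(Z * mask, fs=fs, nperseg=1024)

# A production system replaces this fixed frequency split with a mask
# predicted per time-frequency bin by a network trained to recognise
# each instrument and each speaker's voice.
n = min(len(voice_est), len(voice))
print("separation error:", float(np.mean((voice_est[:n] - voice[:n]) ** 2)))
```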

For anyone who knows someone with a hearing aid, that will sound like a solution to a common complaint known as the “cocktail party problem”: competing sounds are amplified to the same degree, exhausting the wearer and in many cases leading them to switch the device off.

Some conversations between The Beatles in the documentary Get Back are no longer secret thanks to AI.

Unsurprisingly, Simon Carlile, a neuroscientist and senior researcher at Google Research Australia, is quick to draw comparisons between Jackson’s work and the hearing collaboration.

“It’s the sort of outcome we’re looking for, but for the hard of hearing,” Dr Carlile says.

“We’re saying ‘how can we make the hearing loss system output be more like normal hearing by redesigning the hearing aid for acoustic hearing or cochlear implants for electric hearing?’

“I think it’s a problem for AI or at least machine learning to find out where the associations are and what we need to change in terms of the signal processing to get the outcome we’re looking for.”

Google and Cochlear are working with researchers at Macquarie University Hearing, the National Acoustic Laboratories, NextSense and the Shepherd Centre – all part of the collaboration – to solve this problem.

In normal hearing, the brain uses its 30,000 neural connections from the ear to sift through competing sounds and focus on whatever an individual wants to concentrate on – the feat of picking one voice out of a noisy room that gives the cocktail party problem its name.

“Many of the sounds are smeared together,” Dr Carlile says of people who use a hearing aid or implant.

“We lose a lot of the tonal characteristics or temporal characteristics of sounds – we can’t work out where things are in space – and these are all different cues the brain uses for solving the cocktail party problem.”

For Cochlear, chief technology officer Jan Janssen is planning to harness AI to speed up the normally laborious process of calibrating hearing implants to the recipient’s unique hearing pattern. The partnership labels this process “hyper-personalising”.

AI could enable a more precise calibration of a cochlear implant to better suit a listener’s individual hearing pattern.

“We’ve been particularly looking at the opportunities to use AI type of algorithms to help with what we call the fitting process,” Mr Janssen said.

“So when someone receives a cochlear implant, there’s a lot of settings or parameters that have to be adjusted to the individual. We want to make sure that people can hear sounds across all the frequencies that we provide.

“That’s a process that today is done by an audiologist in cochlear implant clinics. And so we’ve been working for a number of years now to use AI type of approaches to optimise that process so that it can be delivered in a more efficient way but, at the same time, making sure that the outcomes are either equivalent or ideally better than would be achievable through the current process.”
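In outline, the automation Mr Janssen describes is an optimisation loop. The Python sketch below is a toy only – Cochlear’s real fitting algorithms are not public, and the simulated listener, channel count and step size are all assumptions – showing how per-channel settings could be nudged toward a target response without an audiologist adjusting each one by hand.

```python
# Hedged sketch of an automated fitting loop (assumptions throughout;
# not Cochlear's actual algorithm). A toy optimiser nudges per-channel
# stimulation levels until a *simulated* listener reports every
# frequency band at a comfortable target loudness.
import numpy as np

rng = np.random.default_rng(0)
n_channels = 22                                  # electrodes in a typical array
sensitivity = rng.uniform(0.5, 1.5, n_channels)  # hidden, differs per person

def perceived_loudness(levels):
    """Stand-in for the listener's feedback (not a clinical model)."""
    return sensitivity * levels

levels = np.ones(n_channels)    # initial device settings
target = 1.0                    # desired loudness in every band
for step in range(100):         # the iterative "fitting session"
    error = target - perceived_loudness(levels)
    levels += 0.5 * error       # nudge each channel toward the target
    if np.max(np.abs(error)) < 0.01:
        break

print(f"fitted in {step + 1} steps; worst band error {np.max(np.abs(error)):.4f}")
```

In practice the listener feedback would come from measured responses or a learned neural model rather than a known formula – which is exactly the gap the partnership wants AI to fill.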

Mr Janssen also highlights the potential benefits for older people who receive a cochlear implant, for whom it can be harder to fully restore hearing.

“People who get a cochlear implant typically go from 10 per cent word understanding to say 60-70 per cent word understanding. It’s a massively effective device. But there are still a number of aspects in hearing outcomes that we absolutely want to improve.

“A lot of people who have a cochlear implant will tell us they enjoy music but, when they do simple melody tests, for example, they don’t score very well, because cochlear implants are not very good at telling one male speaker from another male speaker, which can be a challenge for people.

“So AI can play a number of roles. We might be able to fast track that aspect if we can build representative neural models that could allow us to personalise or to optimise the stimulation more for a given individual.”

Academic director of Macquarie University Hearing, Professor David McAlpine, said an automated process based on an individual’s listening performance would reduce the number of return visits and the amount of tweaking required when someone gets a new device.

“Ideally, we want to map the performance of a hearing-impaired individual’s inner ear and listening brain, compare this to a model of normal hearing, and use this information to optimise the settings of their device, thereby restoring their hearing to normal or near-normal performance,” Professor McAlpine said.

“This mapping would be dynamic, adapting to the environment, and reducing the need to adjust to new hearing devices, as the profile would be transferable. This approach could be used to treat all sorts of hearing disorders such as tinnitus (ringing in the ears) and hyperacusis (extreme sensitivity to sound).

“In theory, it could also help optimise any listening system, including voice recognition systems and ‘hearables’ like noise-cancelling headphones, which help improve listening performance for people with clinically normal hearing but who struggle to hear in background noise.”

The collaboration will run for three years and will be reviewed six months after its launch to ensure it is on track to help people hear more clearly.
