The insurance industry has always worked hard to assess risk accurately and to investigate claims, but the data revolution creates some grey areas.
One is the problem of unintentional discrimination: the insurer’s artificial intelligence (AI) algorithms might judge someone a higher or lower insurance risk simply because they belong to a particular demographic group, defined by factors such as age, sex, income or ethnicity.
“As companies know more and more about that, if things get inaccessible for the poorer risks, is that fair, is that the appropriate use of data?” asked Ms O’Driscoll.
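One way to make the concern concrete: even when a pricing model never sees age, sex, income or ethnicity directly, correlated proxies such as postcode or occupation can stand in for them. Below is a minimal sketch of the kind of fairness check an insurer might run over its automated quote decisions - the records, group labels and the four-fifths (0.8) threshold are all illustrative assumptions, not any insurer’s actual practice.

```python
# Hypothetical sketch of a disparate-impact check on automated quote
# decisions. The records, group labels and the four-fifths (0.8)
# threshold are illustrative assumptions, not any insurer's system.

def approval_rate(decisions, group):
    """Share of applicants in `group` whose quotes were approved."""
    members = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in members) / len(members)

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of approval rates between two groups; under the common
    four-fifths rule, a ratio below 0.8 flags potential adverse impact."""
    return approval_rate(decisions, protected) / approval_rate(decisions, reference)

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
if ratio < 0.8:
    print(f"Potential adverse impact: approval ratio {ratio:.2f}")
```

In practice such a ratio would be one signal among many feeding the kind of fairness discussion Ms O’Driscoll raises, not a pass/fail verdict.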
Social media’s role is also being assessed. Social media data can improve risk assessment in the insurance industry and strengthen fraud detection: insurers can compare their customers’ social media activity with their claim records, looking for inconsistencies. Banks use social media in much the same way.
“Using social media as an indicator on somebody’s risk or their propensity to default is a very grey area,” said Zoe Willis, KPMG Partner, Data and RegTech.
“Should you use social media analytics, or the data that people post about themselves online, to drive that? There are a lot of very deep conversations being had on that.”
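What that cross-referencing might look like mechanically can be sketched in a few lines. The data model, keyword list and matching rule below are illustrative assumptions only - real systems are far more sophisticated, which is exactly why the grey area Ms Willis describes matters.

```python
# Hypothetical sketch of the cross-referencing described above:
# compare claim records against public posts and flag apparent
# inconsistencies for human review. The data model, keyword list
# and matching rule are all illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    claimant_id: str
    injury_reported: str      # e.g. "back injury"
    incident_date: date

@dataclass
class Post:
    claimant_id: str
    posted_on: date
    text: str

ACTIVITY_KEYWORDS = {"marathon", "ski trip", "hiking", "gym session"}

def flag_inconsistencies(claims, posts):
    """Yield (claim, post) pairs where a post made after the incident
    mentions activity that looks inconsistent with the injury claimed."""
    for claim in claims:
        for post in posts:
            if (post.claimant_id == claim.claimant_id
                    and post.posted_on > claim.incident_date
                    and any(k in post.text.lower() for k in ACTIVITY_KEYWORDS)):
                yield claim, post  # routed to a human investigator, never auto-denied

claims = [Claim("c1", "back injury", date(2024, 3, 1))]
posts = [Post("c1", date(2024, 3, 10), "Completed my first marathon!")]
for claim, post in flag_inconsistencies(claims, posts):
    print(claim.claimant_id, claim.injury_reported, "->", post.text)
```

Note the design choice in the final comment: a flagged pair is routed to a person for investigation rather than triggering any automatic decision.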
Potential discrimination and data privacy are emerging risks in many industries. Retail, for example, has come under scrutiny for its use of facial tracking technology, which can capture shoppers’ faces and cross-reference the biometric data, potentially to identify known shoplifters - and, in a “smart”, inter-connected shopping centre, even alert tenants as the tracked person moves around the mall.
The capabilities that technology gives businesses can often “over-excite” them, said Scott Guse, partner, audit, assurance and risk consulting, at KPMG.
“That is a situation where the technology's wonderful, and it has great application for stopping leakage in a retail business, so the retailers love it, but those ethical elements are very difficult,” he said.
“If people get too excited about the technology and its capability, you could argue that they can lose sight of the bigger ethical picture.”
The use of data, and of any technology, should always be considered in terms of the social licence to use them, said Ms Willis.
“For example, that (facial recognition) technology can be used to spot vulnerable people that may need help, as well as people likely to offend,” she said.
“Where we can, I think we have to focus on the positives that can come from the use of this technology, as well as the negatives. If we’re keeping the social licence front-of-mind we can keep the focus on the benefits to society as a whole.”
Surveillance is a “particular minefield” for the insurance industry, said Ms O’Driscoll.
“The justification for surveillance is the extent to which people actually try to defraud insurance, and fraudulent claims drive up the cost of insurance for everyone,” she said. “But on the other hand, the industry better understands ... the interaction of injury, and claims for injury, with mental health.
“So instead of the focus of surveillance being on potential fraud, the industry is changing that to a focus on the fact that, where possible, people will be better off getting back to work.”
According to Jason Smith, board director at the Risk Management Institute of Australasia, it ultimately comes back to the human element.
“Whatever technology you're using, for whatever role, you need to maintain human oversight,” he said.
“Things like AI and machine learning can do a lot by themselves in the background, but ultimately you need to have a governance structure that drives human oversight.
“You need to have governance in place that ensures that the data that you're collecting is actually fit for purpose, and you need to make sure that you've got absolute transparency - not only with what you're doing internally with the data, but also externally - so that what you’re doing with the data is actually delivering a benefit to your customer, and to society.”
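A minimal sketch of what “a governance structure that drives human oversight” can mean in practice is a routing rule: the model runs in the background, but low-confidence or high-impact decisions always go to a person. The thresholds and labels below are illustrative assumptions, not a description of any real system.

```python
# Hypothetical sketch of the routing rule implied by Mr Smith's point:
# the model works in the background, but low-confidence or high-impact
# decisions always escalate to a person. Thresholds are illustrative
# assumptions.

def route_decision(confidence: float, high_impact: bool) -> str:
    """Return who decides: the automated system or a human reviewer."""
    if confidence < 0.9 or high_impact:
        return "human_review"    # escalated, logged and auditable
    return "auto_decision"       # still logged, for transparency

# A borderline, high-impact claim assessment is escalated to a person.
print(route_decision(confidence=0.62, high_impact=True))  # -> human_review
```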