InsurTech Rising Europe gathers industry experts from incumbent insurance companies, InsurTech start-ups and technology firms to discuss the drivers of change in this legacy market. Here, Sophie Roberts, a (re)insurance media specialist, covers how machine learning can improve underwriting decisions and claims processing, as presented by Michael Natusch, Global Head of AI at Prudential, and why data, models and user interfaces are key to cracking AI implementation strategies.
Data, an intelligent agent (or model) and a user interface are the fundamental components a business needs to “get right” and “use together” in order to build a usable and successful AI product.
“Everyone gets very excited about these three components independently, but just using one of them isn’t enough to build a true AI capability,” Natusch explained to a packed audience at the InsurTech Rising Europe 2018 conference in London.
“Just having the data isn’t enough. Having access to a good model isn’t enough, and the user interface wouldn’t have anything to go on without the data and the model coming together. It is only when we put all three together that we have a workable combination,” he added.
But even at this point, it’s not quite AI, he said. For automated actions to happen, Natusch described a “learning loop” that needs to occur between the data, model and user interface on one side, and the customers, agents and partners on the other. Once this takes place, the AI can realise its potential and deliver the required automated actions.
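The loop Natusch describes can be sketched in heavily simplified form. Everything below is illustrative, not Prudential’s actual system: a toy “model” learns a threshold from collected data, the prediction stands in for the user interface, and observed outcomes feed back into the data for the next training pass.

```python
# Minimal sketch of the "learning loop": data -> model -> interface -> feedback -> data.
# All names and the threshold model are illustrative assumptions.

class LearningLoop:
    def __init__(self):
        self.data = []           # (amount, outcome) pairs collected so far
        self.threshold = 0.0

    def train(self):
        # Toy model: approve when the claim amount is at or below the
        # mean of previously approved amounts.
        approved = [amount for amount, outcome in self.data if outcome == "approve"]
        self.threshold = sum(approved) / len(approved) if approved else 0.0

    def predict(self, amount):
        # The "user interface" would surface this decision to an agent or customer.
        return "approve" if amount <= self.threshold else "refer"

    def feedback(self, amount, outcome):
        # Close the loop: the observed outcome flows back into the data,
        # so the next training pass is better informed.
        self.data.append((amount, outcome))
        self.train()

loop = LearningLoop()
loop.feedback(100.0, "approve")
loop.feedback(300.0, "approve")
print(loop.predict(150.0))   # within the learned threshold
```

The point of the sketch is the shape, not the model: each action generates data that retrains the system, which is what separates a static model from the workable combination Natusch describes.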
What algorithm, when?
According to Natusch, AI has been in development since the 1930s, with algorithms evolving over time. The most developed, and currently the most relevant, are deep learning, decision tree learning and probabilistic graphical models.
Several of Prudential’s recent AI-related releases combine these three elements, achieving a significant degree of automation with high accuracy.
Using the example of how Prudential now processes claims forms from a Hong Kong hospital using AI, Natusch demonstrated how they have been able to cut claims processing time from nine days to 2.3 seconds – an impressive statistic which should make any business sit up and listen.
“Through deep learning, a handwriting recognition model was able to recognise the Chinese characters [written on the claims form],” he explained. “From the use of gradient-based decision trees, we are able to retrieve a previous record, which then feeds the probabilistic model to help us understand whether or not we should approve that claim. It has completely transformed the way we look at this process.”
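The three stages Natusch describes chain together as a pipeline. The sketch below stubs out each stage with placeholder logic; in the real system these would be a deep-learning character recogniser, a gradient-boosted-tree retrieval model, and a probabilistic approval model. Every function name, field and threshold here is a hypothetical stand-in.

```python
# Hypothetical sketch of the three-stage claims pipeline; all values are placeholders.

def recognise_characters(form_image):
    # Stage 1: deep learning turns the scanned form into structured text.
    # Stubbed: a real recogniser would read the image bytes.
    return {"patient_id": "HK-001", "amount": 1200.0}

def retrieve_record(fields):
    # Stage 2: gradient-boosted decision trees match the extracted fields
    # to a previous claims record. Stubbed lookup.
    return {"patient_id": fields["patient_id"], "prior_claims": 2}

def approval_probability(fields, record):
    # Stage 3: a probabilistic model scores the claim; here a toy rule.
    base = 0.9 if fields["amount"] < 5000 else 0.5
    return base - 0.05 * record["prior_claims"]

def process_claim(form_image, threshold=0.7):
    fields = recognise_characters(form_image)
    record = retrieve_record(fields)
    p = approval_probability(fields, record)
    return "approve" if p >= threshold else "refer to human"

print(process_claim(b"scanned-form-bytes"))
```

The 2.3-second figure becomes plausible when seen this way: once each stage is a trained model, the end-to-end path is just three function calls rather than nine days of manual handling.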
This is what Prudential are trying to build across many parts of the business, according to Natusch. “We want to use the data, build a model on top of that data, stick a user interface on it - in this case it was a chatbot - that gives us some kind of user experience that hopefully leads to some sort of action, which we want to learn from and feed back into our data so that next time, we’re better prepared,” he said.
It sounds complicated, but Natusch disagreed: once businesses look at the applications of AI in a way that’s relevant to them, he believes, they should simply start “experimenting”.
There are several factors that need to come together to make AI work. To implement AI successfully, Natusch compartmentalised these factors under “technical” and “cultural”.
Under technical, he started with data. “Even though data alone is not enough, it is without a doubt at the heart of a successful AI strategy,” he said. “None of what we’re discussing here would be built without it.”
He also listed tooling, infrastructure, people and application programming interfaces (APIs). “Organisations have to be open to new systems on the network and this is why infrastructure and tooling are important areas to consider and map out as businesses explore the application of AI,” he said.
The importance of people has not gone unnoticed either: without people, new AI capabilities would not be built, he said. “The good news is that it’s 2018 and there are a lot of people who can build AI better than ever before, and there will continue to be more and more people who can build this kind of thing - which we should all be excited about.”
However, everything hinges on APIs, said Natusch. “Unless we expose a model as an API, no software engineer can access it and build it into a workflow that is consumed by software working in real time, with real customer events.” This, he believes, will be the challenge: if AI models are not exposed as APIs, they cannot be built into systems. APIs also adhere to rules and regulations, an important aspect to consider when businesses think about the end customer. “This is not something your AI strategy should forget about,” he added.
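What “exposing a model as an API” means in practice can be sketched with nothing but the Python standard library. The scoring function below is a stand-in for a trained model; a real deployment would use a framework such as Flask or FastAPI and add authentication, logging and versioning, none of which is shown here.

```python
# Minimal sketch of exposing a model as an HTTP API (stdlib only).
# The score() function is a placeholder for a real trained model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score(features):
    # Stand-in model: any callable a software engineer can consume.
    return {"approve_probability": 0.8 if features.get("amount", 0) < 5000 else 0.4}

class ModelAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and return the model's score as JSON.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(score(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve for real, engineers would run:
# HTTPServer(("localhost", 8080), ModelAPI).serve_forever()
```

Once the model sits behind an endpoint like this, it stops being a data scientist’s artefact and becomes something any workflow can call in real time, which is exactly the handover Natusch says the strategy must not forget.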
Culturally, Natusch’s advice is to think small rather than big. “This isn’t about a big IT project, this is about starting small and building things, experimenting with them because in 2018, experimentation is cheap,” he said.
“It might have sounded incredibly tough when I was talking about mastering the Chinese character reading, but it actually only took us three days to get the machine learning to an accuracy of about 57%, and just another two weeks after that to reach 89% accuracy. The key is just to get started.”