Artificial Intelligence Gets Real: Three Ways AI Is Making Inroads in P&C Insurance Claims
For many of us, artificial intelligence (AI) still seems like the stuff of science fiction, but in reality we interact with AI every day through devices like Amazon Echo and Google Home.
Gartner named AI a top strategic trend for 2017, and according to a recent study by Accenture, 85 percent of insurance executives surveyed plan to invest significantly in artificial intelligence over the next three years. The value of AI applications in insurance is clear: it supports human decision making in ways that could streamline the claims process, reduce fraud, and produce better all-around outcomes for both claimant and insurer.

The insurance industry itself is at an inflection point with AI. The field spans many technologies, each at a different stage of development and each with its own capabilities and limitations, at least for now. The first step is to understand what each of these technologies is and where it has the potential to impact the claims process.

First, the basics: artificial intelligence is a broad term for the concept of machines carrying out activities that would normally require human intelligence. Many different technologies fall under the AI umbrella. In this article, the general managers of each of Mitchell’s business units break down three of them (computer vision, machine learning and natural language processing) and explain how each is beginning to be used in the P&C industry.
Machine learning is powering intelligent claims processes
The insurance industry has plenty of data, but turning that data into actionable insights is easier said than done. That’s where machine learning comes in. Very simply put, machine learning is a field of computer science that enables computers to learn without being explicitly programmed to do so. It can quickly review large quantities of data, organize it, extract information from it, and even make recommendations. But to really understand the value of machine learning, it’s helpful to understand the types of problems it can solve and insights it can glean. Here are a couple of examples:
- Machine learning can be used to make predictions. By analyzing historical prescribing patterns and claim outcomes, it could be used to identify claimants at risk of opioid abuse. This would make it possible to intervene with clinical programs early in the process and reduce the abuse, or prevent it altogether.
- Machine learning can be used to detect anomalies more generally—identifying anything on a claim that is atypical or just “odd.” By flagging claims in this way, anomaly detection can serve a wide range of purposes, from clinical intervention to fraud detection to simply alerting an adjuster to review a file.
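To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn's `IsolationForest`, one common off-the-shelf anomaly detector. Every claim feature and value below is synthetic and purely illustrative (this is not Mitchell's actual model); a production system would train on real historical claims with many more features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical claim features: [billed_amount, num_treatments, days_open].
# We simulate a pool of "typical" historical claims to train on.
rng = np.random.default_rng(0)
typical_claims = np.column_stack([
    rng.normal(5_000, 1_000, 200),  # billed amount in dollars
    rng.normal(10, 2, 200),         # number of treatments
    rng.normal(30, 5, 200),         # days the claim has been open
])

# The forest learns what "normal" looks like; contamination is the
# assumed share of anomalies in the training data.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(typical_claims)

def flag_for_review(claim):
    """Return True if the claim looks anomalous and should be routed to an adjuster."""
    return model.predict([claim])[0] == -1  # -1 means "anomaly" in scikit-learn

print(flag_for_review([5_200, 11, 28]))    # a typical claim
print(flag_for_review([95_000, 60, 400]))  # an extreme outlier
```

The same pattern (fit on historical claims, score new ones) underlies both the fraud-detection and the clinical-intervention use cases described above; only the features and the downstream action change.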
These are just two simple examples; the possibilities extend much further. Machine learning could potentially impact almost every stage of the claims process, and each application brings the industry a step closer to an intelligent claims process, one in which decisions are made more quickly, more efficiently, and with better outcomes for both insurer and claimant.
Computer vision is driving more than just self-driving cars
One reason artificial intelligence is particularly relevant to the P&C and collision repair industry is the role it plays in computer vision, and one of the most visible applications of computer vision is self-driving cars. Computer vision seeks to enable computers to ‘see’ images and extract information from them, much as a human does. It goes beyond sensors that simply capture data: it layers in deep learning, the ability to actually perceive, interpret and respond to what’s happening in the environment. This ability is essential for vehicles to be truly autonomous.

But there are other use cases for computer vision in insurance, ones that are having an immediate impact on the claims process. Take, for example, two steps in the physical damage claims process that are based primarily on visual inspection: first notice of loss and repair-versus-replace decisions. With technology available today, photos taken by consumers and submitted via mobile device at first notice of loss could inform a decision about whether the vehicle should be declared a total loss, potentially saving a costly tow to a repair facility. Similarly, these images could help determine whether to repair or replace a damaged part.
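As a rough illustration of the repair-versus-replace idea, imagine a pipeline in which an upstream vision model has already reduced each photo to a couple of numeric damage features, and a simple classifier then triages the part. The feature names, training data and decision boundary below are entirely invented for this sketch (using scikit-learn); a real system would use a deep network trained on labeled claim photos:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features a vision model might extract from a photo:
# [damaged_area_fraction, deformation_score], both scaled to 0..1.
rng = np.random.default_rng(1)
n = 300
repairable = np.column_stack([rng.uniform(0.0, 0.3, n), rng.uniform(0.0, 0.4, n)])
replace    = np.column_stack([rng.uniform(0.4, 1.0, n), rng.uniform(0.5, 1.0, n)])

X = np.vstack([repairable, replace])
y = np.array([0] * n + [1] * n)  # 0 = repair, 1 = replace

clf = LogisticRegression().fit(X, y)

def triage(damaged_fraction, deformation):
    """Suggest 'repair' or 'replace' for one part, given photo-derived features."""
    return "replace" if clf.predict([[damaged_fraction, deformation]])[0] == 1 else "repair"

print(triage(0.05, 0.10))  # light scratch
print(triage(0.90, 0.90))  # heavily crushed panel
```

The hard part in practice is the feature extraction itself, which is exactly what deep-learning-based computer vision supplies; the downstream decision logic can stay this simple.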
While these are just two use cases, a recent report by Tractica indicates that the global computer vision market is expected to grow to $33.3 billion by 2019. Ultimately, both insurer and insured benefit from a streamlined claims process, and computer vision is just one of the many AI technologies available to deliver on that. For more of Debbie’s thoughts on artificial intelligence and computer vision, read her blog post: Computer Vision—from Diagnosing Cancer to Transforming the Claims Process.
Natural language processing isn’t just for customer service anymore
From Geico’s virtual assistant, Kate, which answers basic policy and billing questions within an app, to Lemonade’s chatbot, Maya, which signs people up for renters insurance and even processes simple claims, virtual assistants and chatbots are proliferating in the insurance industry. In fact, in a recent Accenture study of the insurance industry, 68 percent of respondents said their companies use some sort of AI-powered virtual assistant in at least one segment of their business. The technology that enables chatbots to interpret language is called natural language processing (NLP). NLP hasn’t yet advanced to the point where it can handle complex conversational language, but within a given context it can understand requests, ask questions and provide suggestions. Despite those limitations, NLP is already beginning to move out of the customer experience arena and into the enterprise in interesting ways. Companies like Tableau Software and Rhiza are finding ways to integrate it into data analysis, and they are even incorporating voice interfaces (think Amazon Echo and Google Home) along the way.
Tableau’s prototype software, Eviza, enables users looking at data visualizations, like points on a map showing earthquakes, to drill into the data with basic queries along the lines of “show me the area that had the strongest earthquake.” Rhiza offers a commercial product called the Rhizabo that enables sales and marketing teams to create data visualizations for presentations simply by asking questions out loud. As Tableau and Rhiza demonstrate, natural language processing and voice interfaces are maturing, and chatbot functionality is poised to move from customer-facing interactions to behind-the-scenes claims processes; the concept and the potential value are similar in both settings. Ultimately, natural language processing will likely make the vast amounts of casualty, workers' compensation and other claims data easier to access and more actionable.
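The core pattern behind queries like “show me the area that had the strongest earthquake” is parse-then-act: map free text to an intent plus extracted values, then run the corresponding data operation. The deliberately simplistic, rule-based sketch below shows only that pattern; real NLP systems use statistical or neural models rather than hand-written patterns, and the intents and phrasings here are invented for illustration:

```python
import re

# Hypothetical intents for a claims-data assistant. Each pattern maps a
# recognizable phrasing to an intent name and one extracted "slot" value.
PATTERNS = {
    "filter_amount": re.compile(r"claims? over \$?([\d,]+)", re.IGNORECASE),
    "filter_state":  re.compile(r"claims? (?:in|from) ([A-Za-z ]+)$", re.IGNORECASE),
}

def parse_query(text):
    """Return (intent, slot_value) for a recognized query, else (None, None)."""
    for intent, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            return intent, match.group(1).strip()
    return None, None

print(parse_query("show me claims over $10,000"))  # ('filter_amount', '10,000')
print(parse_query("list claims in California"))    # ('filter_state', 'California')
```

In a production assistant, the (intent, slot) pair would then drive a query against the claims database; the value of NLP lies in replacing the brittle hand-written patterns above with models that tolerate varied, conversational phrasing.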