In the first article of this series (“The Rise of Life Insurance Digitization”), we discussed the immense impact of COVID-19 on the life insurance industry. As “normal” life began to change rapidly, so did the life insurance industry: it instituted higher payouts, increased automation, adopted automated underwriting, and learned to operate in a world of volatile interest rates. The question is whether these COVID-related changes represent a one-time “fix” or the start of a trend toward digital transformation.
Automation requires a data transformation process. The data transformation lifecycle contains eight stages – generation, collection, processing, storage, management, analysis, visualization, and interpretation. In response to the pandemic, life insurance automation changes addressed the first four stages of the data lifecycle – generation, collection, processing, and storage. This laid the groundwork for a potential revolution in life insurance operations. That’s a good start. However, the game-changing benefit of automation is not just automating previously manual processes but exploiting the information and insights contained in the derivative data that automation produces. The so-called “digital breadcrumbs” left behind by automation are a treasure trove of potential insights if managed and analyzed effectively. Life insurers can unlock the value of their data by implementing the final four stages of the data lifecycle – management, analysis, visualization, and interpretation.
We have the data, now what?
Today, many life insurance companies find themselves in possession of a lot of data due to the rapid automation efforts made during COVID. Organizations are asking themselves, “How can we profit from that data?” Let’s examine the final four phases of the data life cycle to help your organization unlock the value in your data and services.
1. Management. The management phase entails organizing the collected and stored data, which comes in various forms: structured, semi-structured, and unstructured. The process involves moving the data from its initial storage point into a tool designed for large-scale, flexible data retrieval, making it easily accessible in the later stages – analysis, visualization, and interpretation. Currently, many companies are adopting data lakehouse architectures (See: “The Data Lakehouse – Simple, Flexible, and Cost Efficient”) to facilitate the storage and (more importantly) retrieval of structured, semi-structured, and unstructured data. In addition to implementing this new architecture, life insurance organizations need to design a data governance plan to safeguard Personally Identifiable Information (PII) and data subject to Health Insurance Portability and Accountability Act (HIPAA) regulations.
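To make the governance point concrete, here is a minimal, hypothetical sketch of a pseudonymization step a governance plan might require before PII-bearing records land in the lakehouse. The table, column names, and salt handling are illustrative assumptions, not a prescribed design; a production system would manage salts/keys in a secrets store and apply this during governed ingestion.

```python
import hashlib

import pandas as pd

# Hypothetical extract of policy application records containing PII.
applications = pd.DataFrame({
    "policy_id": ["P-1001", "P-1002", "P-1003"],
    "ssn": ["123-45-6789", "987-65-4321", "555-12-3456"],
    "state": ["VA", "MD", "VA"],
    "face_amount": [250_000, 500_000, 100_000],
})

def pseudonymize(value: str, salt: str = "rotate-me-in-production") -> str:
    """Replace a PII value with a salted SHA-256 digest so records can
    still be joined on the token without exposing the raw identifier."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Tokenize the SSN column and drop the raw values before the data
# leaves the governed ingestion zone.
applications["ssn_token"] = applications["ssn"].map(pseudonymize)
safe_view = applications.drop(columns=["ssn"])

# In a lakehouse, the governed table would then be written to columnar
# storage, e.g.: safe_view.to_parquet("lake/applications/")
```

The key design choice is tokenizing rather than deleting the identifier: downstream analysts can still join and deduplicate on `ssn_token`, but the raw SSN never enters the analytical zone.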
2. Analysis. By this stage in the transformation journey, the processes have been automated and the data has been captured and stored in an efficient retrieval platform (e.g., a data lakehouse). Now comes the tricky part – gaining insights from the data. While there is no “cookbook” approach to data analysis, there are some common best practices.
- Ensure the data is as clean as possible – automated tools like OpenRefine and Talend can help.
- Define the business goals of the analysis. For example, marketing might want an analysis of how to increase conversions by analyzing the behaviors of prospects on the life insurance company’s website.
- Consider the type and amount of data to be analyzed. The size and nature of the data dictate the tool to be used. Small to mid-sized quantitative (i.e., structured) data can be analyzed with a spreadsheet. Larger data sets might require a business intelligence tool. Text analysis (of customer e-mails, for example) might use tools like Thematic or Re:infer. Sentiment analysis used to understand brand perception on social media might use a tool like IBM Watson.
- Analyze the data. This is where the magic happens. Data scientists work with business users to achieve their goals by examining and analyzing the data stored in the data lakehouse, using tools applicable to the data at hand. This is an iterative process in which the answer to one question spawns more questions.
Clean data, a well-organized data lakehouse, an understanding of the business questions, and the right tools are the keys to successful data analysis.
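The clean-then-analyze loop above can be sketched in a few lines. This is a hypothetical example using Python and pandas (one of many possible tools – a spreadsheet or BI platform plays the same role at different scales); the website-conversion data and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical extract of website visits; columns are illustrative.
visits = pd.DataFrame({
    "visitor_id": [1, 1, 2, 3, 3, 4],
    "channel": ["email", "email", "Search", "search", "search", "social "],
    "converted": [True, True, False, True, True, False],
})

# Step 1: clean -- normalize inconsistent labels and drop exact duplicates.
visits["channel"] = visits["channel"].str.strip().str.lower()
visits = visits.drop_duplicates()

# Step 2: answer the business question -- conversion rate by channel.
conversion_by_channel = (
    visits.groupby("channel")["converted"].mean().sort_values(ascending=False)
)
```

Note how the cleaning step directly affects the answer: without normalizing “Search” and “search”, the channel totals would be split across two spurious categories – a small illustration of why data cleaning comes first.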
3. Visualization. In this stage, organizations enhance their analyzed data through effective visualization – creating graphical representations of the data. Visualizing data makes it easier to communicate your analysis quickly to a wider audience, both inside and outside your organization. Various display techniques can be used, such as pie charts, histograms, and scatter plots, and these can be published through reports, dashboards, PowerPoint presentations, and so on. Data storytelling – a combination of visualizations and narrative – has also become an increasingly popular approach to visualization. Regardless of the approach, tools like Excel, Tableau, and DataWrapper are critical to displaying your data accurately.
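As a small illustration of this stage, the sketch below turns a hypothetical analysis result into a chart ready to drop into a report or dashboard. It uses Python with matplotlib as a freely available stand-in for the commercial tools named above; the channel names and conversion rates are invented for the example.

```python
import matplotlib

matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# Hypothetical output of the analysis stage: conversion rate by channel.
channels = ["email", "search", "social"]
conversion_rates = [0.42, 0.27, 0.11]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(channels, conversion_rates)
ax.set_ylabel("Conversion rate")
ax.set_title("Website conversions by marketing channel")
fig.tight_layout()
fig.savefig("conversions_by_channel.png")  # embed in a report or slide deck
```

A simple labeled bar chart like this communicates the marketing finding in seconds, where the underlying table would take a paragraph to explain – the essence of the visualization stage.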
4. Interpretation. Once the data has been captured, stored, analyzed, and visualized, it must be interpreted and acted upon. The answers to the business questions developed in the analysis phase must be translated into changes to business processes, pricing models, and so on. Once these actions are taken, new data is generated and the data lifecycle, from generation through interpretation, begins anew.
Next Up – Exploring Personalized Business Models
At Infinitive, we help organizations with their digital transformation by guiding them through the steps of the data lifecycle. Our expertise in technology and transformation will help ensure your organization gets full value from its data. In the next blog, we will discuss how automation and data science help life insurance companies explore new personalized business models.