KIA&B_SepOct2023-3 | Seite 8

GENERATIVE AI AND EMERGING RISKS

OPPORTUNITIES ABOUND WITH AI, BUT RISKS SURFACE AS WELL
The rapid development of generative artificial intelligence (AI) systems opens opportunities for underwriting and claims processes. It also raises questions of ownership and associated risks.
Large data sets form the core of generative AI models — sets of algorithms that can create seemingly realistic content, such as text, images, or audio, from training data.1
WHO OWNS WHAT?
The collection of these large data sets raises questions about ownership of the data, as well as concerns about data quality and biases. Ownership of AI training data is still a relatively new legal field, but in the US and UK a few lawsuits have already been filed challenging the data used in, and produced by, generative AI models.2 Recently, a class action was filed by several artists whose images had been used to train generative AI tools.3 In the meantime, several platforms have banned AI-generated art.4
In the US and in Europe, class actions against the technology sector have been on the rise.5 This poses a risk for the insurance industry, as class actions can be expensive. While legal disputes, together with existing and new legislation, will help establish some guiding principles, the AI industry will likely adapt to tech-generated disruption much as other sectors disrupted by innovation have done.6 For example, 15 years ago the music and movie industries suffered losses and fought over rights related to illegal downloads. Then innovation brought platforms that allow consumers to listen to and view online content legally while compensating rights holders.
CONCERNS AROUND DATA QUALITY AND BIASES
Another risk relevant to the insurance industry is that publicly available generative AI systems can discourage people from seeking professional medical, legal, or financial advice. These systems make predictions using training data only; they do not access real-time data and thus may generate misleading information. Based on such information, an individual may decide not to seek, for instance, medical help when a visit to the doctor is really what is needed. The outcome can be a worsening of health status, requiring more intensive (and expensive) treatment in the future and, in turn, higher claims on health insurance policies.
The question comes down to the quality and quantity of data used in the development of generative AI systems. An example from Finland, predating the recent developments in generative AI, illustrates how regulators may find the use of non-individualized data discriminatory.
The National Non-Discrimination and Equality Tribunal prohibited a credit institution from using a decision-making method based on criteria such as gender, first language, age, and residential area — criteria that were themselves based on assumptions derived from general statistical data and information on payment defaults. The Tribunal found that the method rated an applicant's creditworthiness lower than it would have been had other information been used.7
Further, several data protection authorities are investigating complaints related to the use of invented (non-accurate) personal data derived from