Artificial intelligence (AI) could bring enormous benefits to the insurance industry, but regulators should put in place a bespoke AI Maturity Framework to support its safe and responsible adoption.
This would represent a risk-based approach that achieves fair and equitable customer outcomes whilst acting as an enabler for insurers. Such an approach would allow innovation to thrive and technology to evolve without a continuous need for regulatory updates to technical definitions.
Alongside our sister technology business, Kennedys IQ, we responded to a discussion paper on AI in financial services issued by the Financial Conduct Authority, Bank of England and Prudential Regulation Authority.
Recognising the potential of AI, we suggested the authorities prioritise three areas: improved consistency (“AI has the potential through standardised processes to assist decisions made by experts”), improved efficiency and an enhanced customer experience.
With regard to the third area, our response included: “AI can be used to provide personalised and convenient customer experiences, such as through the use of chatbots or personalised financial advice. Furthermore, AI can be used to create new products and services that are more accessible to people with disabilities or other special needs.
“The delivery of financial services requires a high degree of empathy. It is, therefore, imperative that those responsible for the design of customer products consider how empathy can be embedded into technology, in order to increase and maintain public confidence in the industry.”
However, the benefits that AI can bring to the customer need to be balanced against the risks. Maturity frameworks have been developed by the likes of Microsoft, IBM and Gartner to guide advances in AI and manage those risks. These frameworks operate at a high level, however, and we argue that the financial services industry needs a more tailored approach: one that would allow insurers to differentiate between different types of activity and classes of business, and between speciality and general insurance.
The framework put forward would have six key elements:
(1) Data collection: Data acquired by the user or from third parties must be for a legitimate use and collected in a way that complies with relevant laws and regulations. The user or third party would audit the data on a regular basis to limit and prevent bias.
(2) Training: A risk assessment and a defined methodology would be required when considering the training data used for an algorithm. The training set would be auditable to limit and prevent bias, but in a different way from point (1): the user would articulate the coverage and/or volume of data required to create safe and responsible algorithms.
(3) Testing/monitoring: Building trust into a system is of paramount importance. Testing and monitoring are continuous activities and should be regulated in terms of their timing and the intervals at which they are undertaken.
(4) Explainability: Clear articulation of outcomes and of inference is required to mitigate irresponsible and unsafe use of algorithms, and is also vital in relation to legal, ethical and reputational risks.
Take the use of an AI system to determine a customer’s car insurance premium. Historically, insurers have regarded women as safer drivers than men. In recent years, however, it has not been possible for insurers to use gender as a rating variable. The criteria of fairness on the one hand, and inclusiveness on the other, need to be carefully balanced, and the output of any AI system must be clearly understood (a simplified sketch of such a fairness check follows this framework).
(5) Transparency: This concept is broader than explainability and provides a right for individuals to understand how their data is being used, how decisions are arrived at, and the policies and ethical principles behind the implementation of algorithms.
(6) Legitimate use: The legitimate use of any algorithm should be overseen by someone with the skills and understanding to assess all the other elements of the framework. Such an individual should be viewed as having a role similar to that of a data protection officer. This person would be accountable for meeting the requirements of the regulatory framework and for implementing any changes necessary. They would understand the full lifecycle of any product and work closely with the software provider where the product is not developed in-house.
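To make the auditing elements of the framework more concrete, the sketch below shows, in highly simplified form, how a user might test whether the outputs of a pricing algorithm diverge across a protected group even where that attribute is excluded from the model’s inputs, as in the car insurance example above. It is a minimal illustration only: the function name, data and tolerance threshold are hypothetical assumptions, not part of any regulatory proposal, and a real audit under points (1) to (4) would be considerably more rigorous.

```python
"""Minimal sketch of an outcome-fairness audit for a motor pricing model.
All names (audit_group_outcomes, TOLERANCE) and figures are illustrative
assumptions, not part of any regulatory framework."""
from statistics import mean

# Hypothetical quoted premiums, grouped by a protected attribute that the
# pricing model itself is not permitted to use as an input (see point 4).
quotes_by_group = {
    "group_a": [412.50, 389.00, 401.25, 455.10],
    "group_b": [498.75, 512.40, 476.30, 503.90],
}

# Permitted relative difference between group means. In practice this
# threshold would be set by the risk assessment described in point (2).
TOLERANCE = 0.05

def audit_group_outcomes(groups: dict[str, list[float]], tolerance: float) -> bool:
    """Flag the model for review if mean outcomes diverge beyond tolerance."""
    means = {name: mean(values) for name, values in groups.items()}
    lo, hi = min(means.values()), max(means.values())
    relative_gap = (hi - lo) / lo
    print(f"Group means: {means}; relative gap: {relative_gap:.1%}")
    return relative_gap <= tolerance

if __name__ == "__main__":
    if not audit_group_outcomes(quotes_by_group, TOLERANCE):
        # Under points (3) and (4), a failed audit would trigger human
        # review and an explainability analysis of the driving variables.
        print("Audit failed: premiums diverge across protected groups.")
```

Run at the intervals contemplated by point (3), a check of this kind would give the accountable individual in point (6) an auditable record of outcomes over time.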
The AI developer, such as a software provider, would also adopt a version of the framework so that all stakeholders have a clear narrative of intention, use and the AI lifecycle.
Our response included: “The proposed AI Maturity Framework presents the opportunity of a scaled approach depending on the application of AI and associated risks: the more risk, the more risk assessment activity is required. This approach intends to offer a balance to the administrative burden from regulation and avoid stifling innovation.”
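The scaled approach could be expressed as a simple mapping from risk tier to the assessment activity required under each element of the framework. The sketch below illustrates the idea only: the tiers, example applications, intervals and controls are hypothetical assumptions rather than proposals from the response.

```python
# Hypothetical mapping of AI application risk tiers to proportionate
# assessment activity. All tiers, examples and intervals are assumptions.
RISK_TIERS = {
    "low": {
        "example": "internal document triage",
        "data_audit_interval_months": 12,
        "monitoring": "periodic spot checks",
        "explainability_report": False,
    },
    "medium": {
        "example": "claims routing support",
        "data_audit_interval_months": 6,
        "monitoring": "scheduled automated monitoring",
        "explainability_report": True,
    },
    "high": {
        "example": "automated premium setting for retail customers",
        "data_audit_interval_months": 3,
        "monitoring": "continuous monitoring with human review",
        "explainability_report": True,
    },
}

def required_controls(tier: str) -> dict:
    """Return the assessment activity proportionate to the risk tier."""
    return RISK_TIERS[tier]

print(required_controls("high"))
```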
Kennedys partner and Head of Innovation, Richard West, says: “Balancing the excitement around the potential of AI against its associated risks is crucial. It will be vital to bring stakeholders and public opinion along as products develop.
“We believe a sectoral framework would support the growth of innovation in financial services, including insurance firms, whilst providing the necessary governance to ensure the safe and responsible route to AI adoption.”