The COVID-19 pandemic has driven us all towards greater use of and reliance upon technology, both to bridge the distance between us and to generate efficiencies in tougher times. This upward trend includes an increase in the number of businesses seeking to make better use of their data via artificial intelligence (AI) technologies. A recent European Commission study found that four in ten (42%) enterprises had adopted at least one AI technology, with a quarter having already adopted at least two, and the trend continues to rise.
The use of such technology presents huge upsides and opportunities for business. It can reveal information from datasets in ways not previously possible, or in ways that enable significant efficiency gains. Take, for example, the application of AI to historic claims data to drive quicker settlement. Models are trained on the claim type, the amounts for which past claims settled and the factors that determined those settlements, so that the level of damages to offer on newly arriving claims can be predicted and recommended in seconds. Such technology can save a claims handler 20 or 30 minutes of evidence assessment on each claim, across thousands of claims. To give life to this example, see how UK insurers are using Portal Manager, an AI technology by Kennedys IQ, to generate such efficiencies.
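The settlement-recommendation idea described above can be sketched as a simple nearest-neighbour recommender: find the most similar settled claims and average what they settled for. The claim fields, figures and similarity measure below are illustrative assumptions for the sketch, not the actual Portal Manager model:

```python
from collections import namedtuple

# Hypothetical historical claim records; the fields and amounts are
# invented for illustration only.
Claim = namedtuple("Claim", ["claim_type", "severity", "settled_amount"])

HISTORY = [
    Claim("whiplash", 2, 2400),
    Claim("whiplash", 3, 3100),
    Claim("whiplash", 5, 5200),
    Claim("slip", 2, 1800),
    Claim("slip", 4, 4000),
]

def recommend_settlement(claim_type: str, severity: int, k: int = 2) -> float:
    """Recommend damages for a new claim by averaging the k most similar
    settled claims of the same type (similarity here = closest severity)."""
    same_type = [c for c in HISTORY if c.claim_type == claim_type]
    if not same_type:
        raise ValueError(f"no settlement history for claim type {claim_type!r}")
    nearest = sorted(same_type, key=lambda c: abs(c.severity - severity))[:k]
    return sum(c.settled_amount for c in nearest) / len(nearest)

# A new whiplash claim of severity 3 is priced against the two closest
# historical whiplash settlements.
print(recommend_settlement("whiplash", 3))  # → 2750.0
```

In practice such systems use many more determining factors and a trained statistical model rather than a hand-rolled nearest-neighbour lookup, but the principle, predicting a settlement figure from comparable historical outcomes, is the same.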
Protecting established principles
However, the design and use of such technologies (given their use of datasets) is not without its challenges in the face of EU and UK laws.
For a number of years, the European Union has sought a balanced path through the challenges and opportunities posed by AI. What has emerged is a commitment to developing legislation that:
- Ensures innovation is promoted and such technologies can thrive, and
- Guarantees security and protects specific individual and human rights.
This is clear from three reports approved by the European Parliament over the past year. The first report promotes a legal framework that instils a number of ethical principles:
- Human-centric, human-made and human-controlled AI
- Safety, transparency and accountability
- Safeguards against bias and discrimination
- A right to redress for those affected by such systems
- Social and environmental responsibility, and
- Respect for fundamental rights.
In addition, any AI-based system that automates significant decisions should be designed to allow for human oversight.
The second report seeks a uniform civil liability regime, based on a future regulation on liability for the operation of AI systems and a revision of the Directive on Liability for Damage Caused by Defective Products.
Finally, the third report outlines an effective intellectual property system and safeguards for the EU patent system, distinguishing between human creations made with the help of AI and creations generated directly by AI. It also takes the position that AI should not have legal personality.
Against this backdrop, Spain, which ranks above the global average in technological matters, has recently launched a National Artificial Intelligence Strategy (ENIA) with seven strategic objectives:
- Scientific excellence and innovation in AI
- Projection of the Spanish language
- Creation of highly-skilled jobs
- Transformation of the productive fabric
- Environment of trust in relation to AI
- Humanistic values in AI, and
- Inclusive and sustainable AI.
The insurance market, aware of the potential of AI to transform society and the sector itself, has been promoting the considered use and integration of AI technologies across multiple areas of its business, from better fraudulent claim detection (another feature provided by Portal Manager by Kennedys IQ) to tailored insurance quotes based on clients' risk profiles and behaviours.
The insurance industry is not immune from the challenges that AI brings with it: the risk of reinforcing biased decision making, the full automation of decisions that significantly affect individuals, and the need to uphold data protection principles. However, in line with the national and European regulatory context, insurers too have been developing a catalogue of principles to ensure the ethical use of AI: fair treatment, proactive responsibility, security, transparency, training, and evaluation and review.
This includes carefully selecting providers of AI technologies who adhere to and embed these principles in the development and supply of their product(s).
What is clear from the signals given by different authorities and economic operators is that, at both public and private level, ethical and human considerations are essential to the development and subsequent use of AI. The use of AI requires the collection, storage and use of large datasets (which can include personal data) and, when deployed in specific contexts, enables the automation of significant decisions that can affect individual rights and freedoms. The movement towards a legal framework governed by harmonised ethics, providing security and certainty to all economic operators and to civil society, should therefore generate a climate of trust and transparency. That, in turn, will favour a greater digital transformation that respects individual rights and the public interest, to everyone's benefit.
Recent moves in the UK
On 11 March 2021, the UK Government set out its plans to review and strengthen the UK's product safety laws, which are more than 30 years old, to ensure they are fit to deal with emerging innovations and technologies, including AI. The review comes amid concerns about race and gender bias built into some consumer products, such as biased facial recognition algorithms.
In a statement, the Department for Business, Energy and Industrial Strategy said:
In the UK, anti-discrimination legislation, notably the Equality Act 2010 and the Human Rights Act 1998, together with sector-specific anti-discrimination laws, offers individuals protection from discrimination, whether generated by a human or by an automated decision-making system. All UK agencies have a non-delegable duty to document anticipated and potential algorithmic discrimination prior to use. The General Data Protection Regulation also gives individuals the right not to be subject to solely automated decisions that significantly affect them.
As these regulatory developments continue to evolve and improve, organisations can take some best practice steps to mitigate AI bias:
- Ensure their cultural values are inherent in any AI projects
- Ensure as diverse a design tech team as possible – ‘diversity in design’
- Avoid selectivity by drawing on data from a wide variety of sources
- Train AI models so that unwanted bias is detected and removed from output, and
- In respect of applications, where possible collect data about race, sexual orientation and similar categories in order to monitor for bias.