AI in the workplace – an invaluable tool or a slippery slope?
This article was co-authored by Eva Rurlander, Trainee Solicitor, London.
The global COVID-19 pandemic brought about many changes in working practices, including a rapid acceleration in the use of Artificial Intelligence (AI) in employment to allow individuals and businesses to continue operating remotely. AI continues to be a hot topic, and in late 2021 the UK Government published a National AI Strategy, setting out its 10-year plan to make Britain a “global AI superpower”.
However, although AI continues to develop rapidly with advancements in machine learning, no new legislation has yet been introduced in the UK to regulate and control the ways in which it may be used. There is, of course, a raft of employment legislation governing how employers may treat their employees, but it was not designed with such technological developments in mind. Is this legislation still fit for purpose, and what particular risks are presented by the use of AI in the workplace?
How is AI used in employment?
The time and effort involved in recruitment, such as drafting job descriptions, screening CVs and conducting interviews/assessment days, is significant for employers, who are likely to repeat the process to fill just one vacancy. AI is increasingly being used to create job descriptions, to sift applications by identifying key words, to schedule interviews and even to analyse candidates’ body language and speech patterns in video interviews. The use of such technology can create significant time savings for busy managers.
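The keyword sift described above can be illustrated with a deliberately minimal sketch. The keywords, applications and threshold below are invented for illustration and are not taken from any real screening product:

```python
# Minimal illustration of an automated keyword sift of job applications.
# All keywords, example texts and the threshold are hypothetical.

def keyword_score(application_text, keywords):
    """Count how many of the target keywords appear in an application."""
    text = application_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def sift(applications, keywords, threshold=2):
    """Return only the applications mentioning at least `threshold` keywords."""
    return [app for app in applications
            if keyword_score(app, keywords) >= threshold]

keywords = ["contract drafting", "litigation", "client care"]  # hypothetical
applications = [
    "Five years of litigation and contract drafting experience.",
    "Experienced barista seeking a career change.",
]
shortlist = sift(applications, keywords)  # only the first application passes
```

Even this toy version shows why the choice of keywords matters: any candidate who describes relevant experience in different words is filtered out before a human ever sees the application.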
Onboarding and performance management
Although such practices raise some concerns about ethical use, AI may also assist with monitoring and reviewing employee performance, including monitoring the activities of employees working remotely. Having identified any performance failings, it can be used to automatically allocate new work to underutilised members of staff, thereby increasing efficiency and performance.
Monitoring employee welfare
Less commonplace, but a potential area of development, is the use of AI to monitor employee wellbeing (e.g. stress levels) in order to better support staff and thereby increase retention and productivity. In 2020, PwC ran a pilot scheme to assess the impact of the COVID-19 lockdown on staff mental health and wellbeing. As part of this scheme, lifestyle and biometric data, including information on heart rate and sleep, was collected from wearable devices worn by 1,000 employees. This data was then assessed by various AI programmes in order to identify any health concerns. The collection of such data can help employers identify and reduce the workload of overworked and/or stressed employees, addressing potential health issues before they escalate. This is beneficial to both parties: early identification of such issues is likely to reduce staff sickness levels and improve productivity and retention.
On the face of it, using AI to assist with decision making in recruitment and performance management may be less risky than relying on the opinions and views of individual managers who may have personal biases and therefore have the potential to discriminate against candidates and employees.
However, all AI data has inevitably had some human input at some point in its creation. Some types of algorithms are trained using historic labelled data e.g. using historic data about successful interview candidates in order to select similar candidates in future recruitment exercises. There is therefore the potential for ‘algorithmic discrimination’ arising from unrepresentative or out-of-date data which may result in underrepresented groups continuing to be underrepresented in the workplace, even when individuals from these groups may be well suited for the role.
In 2018, Amazon stopped using an automated recruitment tool after it was shown to discriminate against female candidates.
The relevant algorithm had regard to application and recruitment patterns in the company over the previous 10 years, during which applicants had been predominantly male. As a result, the algorithm learned to favour male applicants.
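The feedback loop behind such ‘algorithmic discrimination’ can be sketched with a deliberately simplified scorer trained on skewed historic data. All names, features and figures below are invented for illustration; real recruitment models are far more complex, but the mechanism is the same:

```python
# Deliberately simplified illustration of algorithmic discrimination:
# a scorer trained on historic hiring outcomes inherits the skew in
# that data. All features and outcomes are invented.

from collections import defaultdict

def train(historic_candidates):
    """Estimate, per feature, the historic hire rate among candidates
    whose applications carried that feature."""
    seen = defaultdict(int)
    hired = defaultdict(int)
    for features, was_hired in historic_candidates:
        for f in features:
            seen[f] += 1
            if was_hired:
                hired[f] += 1
    return {f: hired[f] / seen[f] for f in seen}

def score(model, features):
    """Average the historic hire rates of a candidate's features."""
    rates = [model[f] for f in features if f in model]
    return sum(rates) / len(rates) if rates else 0.0

# Historic data in which mostly male candidates were hired, so the word
# "women's" (e.g. "captain of the women's chess club") correlates with
# rejection even though it says nothing about ability.
historic = [
    ({"python", "team lead"}, True),
    ({"python", "women's"}, False),
    ({"java", "team lead"}, True),
    ({"java", "women's"}, False),
]
model = train(historic)

# Two equally qualified candidates now receive different scores purely
# because one feature acts as a proxy for sex.
a = score(model, {"python", "team lead"})  # 0.75
b = score(model, {"python", "women's"})    # 0.25
```

No rule in this code mentions sex directly, which is precisely why such bias can pass unnoticed without rigorous testing of the training data and outputs.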
Employers will remain legally responsible for any decisions they make that are reliant on AI systems. As decisions (such as selecting candidates for interviews or appointing successful candidates to positions) will not be made because of any particular protected characteristic, they are unlikely to be directly discriminatory. However, the real risk for employers is that the use of AI systems which have not been subjected to rigorous testing and validation processes may give rise to claims that they have indirectly discriminated against candidates and/or employees.
Indirect discrimination occurs where an employer applies a provision, criterion or practice (PCP) neutrally to everyone, but the PCP has the effect of placing individuals who share a certain protected characteristic (e.g. sex, disability, race) at a particular disadvantage. In such a case, the application of the PCP will be discriminatory unless it can be justified as a proportionate means of achieving a legitimate aim.
The application of, for example, an automated CV screening programme will inevitably amount to a PCP and, as detailed above, it is possible for such a practice to place individuals with certain protected characteristics at a disadvantage in terms of recruitment opportunities. Therefore, unless the use of the particular PCP can be justified, it will be discriminatory.
Another risk is that AI systems may not be able to fully address an employer’s obligation to make reasonable adjustments for disabled individuals. This obligation, which applies to job applicants as well as to employees, arises where an individual meets the definition of disability set out in the Equality Act 2010 and the employer is aware, or ought reasonably to be aware, of the disability. Technology that analyses the speech and facial expressions of candidates in interviews may not identify where speech issues or particular facial expressions or tendencies, such as an inability to make eye contact, arise as the result of a medical condition. Equally, automated scoring of written assessments may not take account of medical conditions such as dyslexia.
Therefore, adopting such systems without questioning or understanding the underlying technology and without adopting any additional human checking systems means otherwise competent candidates may miss out. This also leaves employers exposed to discrimination claims.
Data protection issues
Although there is currently no specific legislation preventing or limiting the use of AI in the workplace, existing data protection law provides some protection for individuals who may find themselves subject to AI decisions.
The usual principles of the Data Protection Act 2018 (DPA) and the UK General Data Protection Regulation (UK GDPR) apply wherever AI collects or uses the personal data of job candidates or employees. Should AI be used to process health information about employees, this will amount to special category data, and the more onerous rules governing the processing of such data will apply.
In addition, Article 22 of the UK GDPR provides that, subject to a limited number of exceptions, individuals have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning them, or similarly significantly affects them.
One permitted exception is where an individual gives their explicit consent to the particular automated processing. It would therefore be advisable for employers to seek consent from job candidates and employees before implementing any fully automated AI systems.
In 2021, the EU published a proposal on AI regulation which aimed to address its risks and suggested that AI involved in employment decisions should be classified as ‘high risk’ and subject to certain specific safeguards.
As yet, no such legislation has been introduced in the UK. However, there are growing calls for AI use to be regulated with the Trades Union Congress (TUC) being particularly vocal on this issue. It therefore appears likely that additional regulations and controls will be introduced in the foreseeable future. As a minimum, it is expected that additional guidance will be provided, and the UK Government is currently working with the Alan Turing Institute to provide updated guidance on AI ethics and safety in the public sector.
There is little doubt that AI will continue to transform employment practices, and that the use of AI technology can have significant benefits for employers and individuals alike. However, there are clearly also risks, and employers need to be cautious about blindly accepting and relying on AI systems without understanding the technology behind them and how automated decisions are made.