The impact of AI on workplaces: creating a new generation of Luddites or converting reformists?

This article was co-authored by Jack Kelly, Graduate, Sydney and Joe Cunningham, Product Manager, London. 

A ‘Luddite’ historically referred to a member of a radical faction of English workers in the 19th century who destroyed machinery during the Industrial Revolution in protest against the mechanisation of their work. When workers were later displaced by digital computers during the Digital Revolution, the term evolved to describe a person opposed to new technology and working arrangements.

Artificial intelligence (AI) and, in particular, machine learning (a subset of AI) have rapidly accelerated the digital transition, reshaping the way we work by automating tasks with ever-decreasing human input.

This article covers how employers are embracing AI across certain industries, some pitfalls to be mindful of when adopting AI systems, and potential statutory and regulatory changes to keep an eye on.

Impact on industries: from claims handlers to social media content creators

Kennedys IQ, the technology arm of Kennedys, combines AI and machine learning with human expertise. The bespoke IQ platform consists of several features that apply AI in a way that is explainable and demonstrable to its users. This software-driven offering brings efficiencies and insights to multiple parts of the claims handling and underwriting processes, distinguishing Kennedys as the legal profession adopts AI to augment legal practice.

Media content creation is similarly being reconfigured by AI. The CEO of Buzzfeed, the American digital media giant known for its ceaseless lists and quizzes, announced that ChatGPT, an AI chatbot designed by OpenAI, would be used to enhance and personalise the company’s content, with a view to later adopting it within its editorial and business operations.

Buzzfeed’s adoption of AI for content creation shows how corporations are turning to AI to automate previously labour-intensive drafting work.

The arbiter of recruitment: how AI is dictating HR decision making in recruitment and beyond

There are dozens of AI recruitment start-ups promising efficiency savings by reducing the time businesses spend identifying, contacting and reviewing job candidates. ‘Fetcher’ is merely one example within a booming software market (alongside offerings from Alphabet, HireTeamMate and Oracle) that claims to combine machine learning with human insight to reduce the drain on a business’s resources when recruiting employees. Fetcher assists recruiters by managing key hiring criteria, capturing gender and demographic data, and delivering an AI-curated shortlist of candidate profiles for a business to review.

Australian government agencies have also utilised AI for HR decision making, but with problematic results. When Services Australia conducted an AI-backed recruitment process in 2020, the Merit Protection Commissioner found that around 10 per cent of contested promotions were subsequently overturned on review. Commissioner Waugh determined that Services Australia’s AI decision-making process had skewed the defined principles of ‘merit’ (ability, suitability and capacity) that were intended to underpin promotion decisions.

Where AI is used to replicate human-level decision making, AI systems can learn biases contained in their training data, or rely on data points that a human would not consider relevant, ethical or morally acceptable. Whilst the benefits of AI-assisted HR decision making are appealing, the Commissioner’s findings remind employers to consider carefully the type of AI being deployed in their business, to scrutinise how AI-backed service providers have developed their systems, and to test those systems against current practices before relying on them.
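By way of illustration, one common first-pass test for this kind of tool is the ‘four-fifths’ (80 per cent) rule used in employment contexts to flag adverse impact. The Python sketch below applies that rule to hypothetical screening outcomes; the group labels, figures and threshold are illustrative assumptions, not any particular vendor’s method.

```python
from collections import Counter

def selection_rates(outcomes):
    """Selection rate per demographic group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the screening tool shortlisted the candidate.
    """
    totals = Counter(group for group, _ in outcomes)
    shortlisted = Counter(group for group, ok in outcomes if ok)
    return {g: shortlisted[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.

    Under the 'four-fifths' rule of thumb, a ratio below 0.8 is a
    common red flag warranting closer scrutiny of the tool.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes for two demographic groups
results = ([("A", True)] * 40 + [("A", False)] * 60
           + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(results)
print(rates)                                  # {'A': 0.4, 'B': 0.2}
print(round(adverse_impact_ratio(rates), 2))  # 0.5 -> below the 0.8 threshold
```

A ratio below 0.8 does not of itself prove unlawful discrimination, but it is a signal that the tool’s outputs warrant closer review before deployment.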

Risks and upcoming regulations of utilising AI in the workplace in Australia

The Royal Commission into the unlawful ‘Robodebt’ scheme currently underway in Australia is investigating serious accusations of negligence by high-ranking government officials, including former prime ministers and a former Attorney-General. The notorious Robodebt scheme was a debt assessment and recovery program deployed by Services Australia that used automated data-matching to review welfare payment compliance. The failed scheme was plagued by controversy: some 470,000 debts were incorrectly issued, with grave impacts on recipients of incorrect debt notices (including the deaths of some vulnerable recipients). Researchers at The University of Queensland have attributed Robodebt’s errors to system design, which failed to account for complexities such as different spellings of employer names and individuals’ unique work histories (for example, casual work).
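The class of matching error described can be shown with a minimal sketch, assuming (hypothetically) that income records are keyed by free-text employer names. The employer names, figures and normalisation rules below are illustrative assumptions only, not the actual Robodebt logic.

```python
def normalise(name):
    """Crude normaliser: upper-case and strip common company suffixes.
    (Illustrative only -- real entity resolution is far more involved.)"""
    name = name.upper().strip()
    for suffix in (" PTY LTD", " LIMITED", " LTD"):
        name = name.removesuffix(suffix)
    return name.strip()

def apparent_debts(declared, ato, key=lambda name: name):
    """Flag tax-office income with no matching declared employer as a 'debt'."""
    declared_keys = {key(name) for name in declared}
    return {name: income for name, income in ato.items()
            if key(name) not in declared_keys}

# Income the welfare recipient declared, keyed by employer name
declared = {"Woolworths Ltd": 18_000}
# The same income reported to the tax office under a different spelling
ato = {"WOOLWORTHS LIMITED": 18_000}

print(apparent_debts(declared, ato))             # exact match: phantom $18,000 'debt'
print(apparent_debts(declared, ato, normalise))  # normalised: {} -> no debt raised
```

Robust entity resolution requires far more than suffix-stripping, which is precisely why automated compliance systems demand careful design and testing before debts are raised against individuals.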

Whilst advancements in AI, and in particular machine learning, offer clear utility through automated decision making, caution is warranted to ensure that an AI product does not rely upon data sets with embedded biases.

The Australian Human Rights Commission (AHRC) has recommended legislative reform governing the use of AI to address threats towards historically marginalised groups (such as people of colour, women and people with disabilities). The AHRC’s December 2022 report found that a significant overhaul is required because bias (conscious or unconscious) can become entrenched in the data sets relied upon by AI.

One recommendation, highly relevant for employers, is to amend the federal discrimination statutes. The current legislation requires a ‘person’ to engage in the discriminatory treatment, as opposed to an automated computer process (for example, under the Sex Discrimination Act 1984 (Cth), a ‘person’ must discriminate against another person on the ground of the sex of the aggrieved person). The AHRC’s proposed amendments aim to settle where responsibility lies when an employer has adopted AI services (i.e. not a ‘person’) to make a recruitment decision that inadvertently discriminates against a candidate because of their sex. This is especially relevant where AI systems are ‘unexplainable’, so-called ‘black box’ systems.

The Attorney-General’s Department (AGD) published a review of Australians’ privacy protections in February 2023, prompted by pervasive digital technologies, AI, the risks posed by facial recognition technology (such as using an individual’s physical facial features to identify them) and the collection of biometric data. The AGD’s Privacy Act Review Report 2022 considered the use of facial recognition technology to infer information about a person’s mood and personality traits, and proposed changes to the protections for employees’ information. The potential for ‘privacy codes’ to govern the collection and disclosure of employee information was canvassed; however, enhanced protections within the private sector are to be the subject of further consultation and review.

Comment

With a view to avoiding inclusion in the next wave of Luddites, employers (and governments) are replacing an array of time-consuming work tasks with AI and machine learning where multiple workers complete the same tasks repetitively (e.g. certain HR decision making, accounting and administration).

When used effectively, AI can expedite workplace practices and reduce inefficiencies for businesses by automating tasks and streamlining processes, such as in claims handling and online content creation.

AI-induced inaccuracies can lead to discriminatory decision making and legal exposure when errors occur. Employers are advised to use trusted service providers and to test thoroughly for discrimination or bias towards certain data points before adopting AI offerings.
