In our latest webinar covering emerging trends in cyber and data, Kennedys’ global cyber team shared their predictions for 2025.
One such prediction was an evolution in the way cybercriminals will leverage AI to increase the effectiveness of social engineering as a means of securing access to an organisation’s systems.
Given a recent uptick in cyber attacks stemming from cybercriminals’ use of Microsoft Teams, might the future already be here?
What is social engineering in the cyber context?
Social engineering describes the ways in which cybercriminals manipulate their victims into revealing sensitive information, irrespective of how strong their cyber security posture might be.
Speak to any expert in cyber and they will confirm that, no matter how much money a company spends on IT security software, its employees remain the weakest link.
The prime example of social engineering is ‘phishing’, typically via email, whereby a cybercriminal coaxes an employee into clicking on a malicious link by making the request to do so as convincing as possible (often by masquerading as a known person). Once the link has been clicked, the employee will typically be prompted to surrender their credentials, or the link itself might deliver malware designed to secure unauthorised access to systems.
How has AI been used by cybercriminals up to this point?
Cybercriminals have been using AI-powered large language models (such as ChatGPT) for some time to make their phishing emails as sophisticated, and therefore as convincing, as possible.
Say, for example, the CEO of a business was very active on social media. A particularly savvy cybercriminal might feed some of that content into a platform like ChatGPT to craft a phishing email bespoke to that individual, thus increasing the chances of a malicious link being clicked.
Or better still, if they secured access to a supplier’s email inbox, then the supplier’s previous correspondence with the target company could be channelled through the AI platform to create a convincing email requesting payment of a fabricated invoice. This is something Kennedys’ cyber team have been seeing for some time now.
How might AI be used by cybercriminals in the future?
In our recent webinar, we predicted increased use of both voice cloning and deepfake technology as an evolution of the way in which AI is making social engineering even more effective for cybercriminals.
This is something we are already starting to see more of in the very early stages of 2025. In recent weeks, there has been a rise in cyber attacks stemming from an evolving social engineering method involving Microsoft Teams. In short:
- The cybercriminal will send a flurry of deliberately suspicious-looking emails to an employee of a target company.
- Where that company allows external communications with third parties on Microsoft Teams, that same cybercriminal will then start a new chat with the employee, pretending to be someone from the company’s IT team.
- Through the chat function, the fake IT member will encourage the employee to allow remote access to their desktop, ostensibly to investigate the suspicious-looking emails the cybercriminal sent in the first place.
- With remote access secured, the cybercriminal can then move laterally within the company’s systems in order to deploy ransomware, or otherwise seek to monetise their unauthorised access.
In this situation, AI can be used in several ways, including to research the IT team member being impersonated, so that the cybercriminal’s messages in the Microsoft Teams chat are as convincing as possible.
Through voice cloning and/or increasingly accessible deepfake technology, cybercriminals can continue to evolve their use of AI to advance their social engineering methods in the future.
The more ‘cyber conscious’ employees might ask the purported IT member to call them via Teams, or even to set up a video link, before granting access. But if you heard, or even saw, the person you were expecting to speak with in IT when support is needed, would it cross your mind that it might be a cybercriminal using AI?
What might the legal ramifications look like?
As with any cyber attack, unauthorised access to personal or otherwise sensitive data can give rise to a myriad of regulatory, contractual and other legal risks.
Even organisations that have spent significant sums on cyber security could be exposed to these risks, simply because an employee was manipulated, through the evolving social engineering techniques described above, into giving a cybercriminal remote access to systems. Cyber security expenditure is rarely a defence to the regulatory investigations, contractual implications and other claims that can follow.
As with any form of social engineering, employee training needs to account for evolving cyber attack methods, and senior leadership must know which experts to turn to when things go wrong. Senior leadership must also be alive to, and prepare for, the increased risk of D&O exposures, including the possibility of shareholder derivative actions following a cyber incident.
For all the ways in which an organisation can use AI for good, cybercriminals are already using it to improve their own nefarious processes. The trick is keeping up.