William Fry’s Rachel Hayes and Róisín Culligan examine the current cybersecurity landscape in the face of evolving technology and increasing regulations.
Emerging technologies such as AI are undoubtedly valuable tools for businesses. At the same time, they also create increased cyber risks and challenges.
There has been a seismic shift in the use of AI in a cybersecurity context in recent times. As a technology, AI is both the problem and the solution for businesses seeking to prepare for, and mitigate against, cyberthreats. The development of AI technologies has driven a rise in cyberthreat-detection tools but equally (and more challengingly) has created a new cyberthreat landscape.
Until recently, terms like jailbreak attacks and prompt injection attacks were not well known, but they are now becoming commonplace. In response to this new era of cyberthreats, legislators in Europe are in the midst of rolling out new laws to ensure there are harmonised rules for cybersecurity across the EU – which, for the first time, means certain organisations will need to be held accountable to cybersecurity standards.
An expected increase in threats and attacks
A recent report from the UK’s National Cyber Security Centre found that, at a global level, the ransomware threat is likely to increase over the next two years. The report suggests that one reason for this is that AI has lowered the barrier for novice cybercriminals and hackers to carry out effective access and information-gathering operations.
With emerging technologies such as AI, cyberattacks are becoming more sophisticated and less easily detectable by systems or organisations. Ransomware attacks continue to dominate the cybersecurity landscape, with more organisations than ever reporting that they have fallen victim to these attacks.
Further, the increasingly interconnected nature of supply chains means that a single vulnerability can lead to compromises across entire networks, as demonstrated by the 2023 MOVEit data breach.
While businesses will no doubt need to face the challenges of increased threats to their systems, AI can also be a valuable tool for increasing cyber resilience.
How does AI help cybersecurity?
AI has multiple applications in cybersecurity, from fraud detection to analysis and prevention. An increasing number of organisations are already deploying AI to assist their cybersecurity personnel to defend against cyberattacks.
AI is being deployed to identify and mitigate potential cyber-risks by analysing data and detecting weaknesses in software and networks via penetration testing. The ability of AI to analyse communication patterns makes it particularly useful in recognising and intercepting phishing attempts, and even in simulating such attacks to help train employees to spot them.
This trend is even endorsed at EU level: under the NIS2 Directive, organisations are encouraged to make use of machine learning or AI systems to enhance their cybersecurity capabilities and the security of network and information systems.
How is AI used in cyberattacks?
On the other edge of the sword, cybercriminals can leverage AI to launch sophisticated and dangerous attacks on businesses.
Social engineering schemes: Cybercriminals use AI to create convincing fake messages or calls that trick victims into revealing sensitive information or making security mistakes.
Password hacking: Cybercriminals use AI to improve the algorithms they use for cracking passwords, making them faster and more accurate.
Deepfakes: Cybercriminals use AI to manipulate visual or audio content and impersonate another individual, creating fake videos or calls that can damage reputation, spread misinformation or coerce action.
Data poisoning: Cybercriminals use AI to alter the data used by an AI system, influencing its decisions and causing it to malfunction or behave maliciously.
These are just some examples of how AI can be used for malicious purposes by cybercriminals, posing serious threats to individuals, organisations and society. Therefore, it is crucial to develop effective countermeasures and ethical guidelines to prevent and mitigate the harmful effects of AI-enabled cyberattacks.
Europe’s increased regulatory obligations
In recognition of the cybersecurity landscape in the AI era (and the introduction of other emerging technologies), the EU has revised existing frameworks and a host of new legal obligations are set to be implemented:
Under the NIS2 Directive, entities that operate in essential or important sectors will need to implement technical, operational and organisational measures to comply with new cyber risk management and reporting obligations. These entities will further need to analyse their business operations, including any use of machine learning or AI technologies, and ensure that robust cybersecurity systems are in place.
Reflecting the seriousness with which the EU is treating the cybersecurity arena, NIS2 imposes obligations on the management bodies of in-scope entities to approve the measures the business takes and to oversee their implementation. With significant fines and the potential for personal liability of senior executives, NIS2 will be a game changer and a board-level agenda item for many organisations this year.
With the text of the EU AI Act having recently been agreed by the European Parliament, organisations that deploy ‘high-risk’ AI systems, as defined by the AI Act, will need to ensure those systems meet a number of cybersecurity standards and perform consistently throughout their life cycle. This will include incorporating technical solutions to prevent, detect, respond to and control data poisoning, model poisoning, model evasion, confidentiality attacks or model flaws.
Looking further into the future, the Cyber Resilience Act, while still in draft form, is likely to be adopted this year and will require manufacturers of software or hardware with digital elements to incorporate cybersecurity into the design, development and distribution of their products.
Under the Cybersecurity Act, the EU intends to introduce the European Cybersecurity Certification Scheme for Cloud Services (EUCS). This voluntary scheme, applicable to all kinds of cloud services across the EU, aims to boost trust in cloud services by setting security requirements across a number of areas, including transparency, data localisation and sector-specific requirements. The scheme is currently in draft form and is keenly followed by many large players in the tech industry.
While the EU’s focus on cybersecurity will undoubtedly lead to better protection for individual consumers and safeguard businesses from the devastating impacts of cyberattacks, establishing robust security measures across all endpoints of an organisation creates a daunting challenge.
Emerging technologies and the rapid development of AI have become essential to many businesses, introducing efficiencies and increasing productivity levels. As outlined above, however, these technological developments have also exposed businesses to an increased likelihood of facing cyberthreats – something the EU has recognised in its digital strategy for the future.
Understanding these trends and reacting appropriately is crucial for any business seeking to safeguard its products and services in the digital era. Furthermore, now that cybersecurity is part of the mainstream conversation at both an individual privacy level and a board level, businesses will need to examine the steps they can take to foster trust with consumers and business stakeholders alike.
By Rachel Hayes and Róisín Culligan
Rachel Hayes is a partner and Róisín Culligan is an associate, both at William Fry.