Versa Network’s AI director Sridhar Iyer discusses cybersecurity in the age of AI and the skills required to handle future cyberthreats.
Despite the escalating importance of cybersecurity, a dangerous skills gap is looming large across the industry. According to the latest reports, 50pc of all UK businesses have a basic cyber skills gap, while 33pc are lacking advanced skills. The issue has been persistent for several years.
Most concerning of all, the issue isn’t just a shortfall in the number of qualified professionals; it’s also a matter of the changing nature of the skills required.
Cybersecurity is an extremely dynamic field, and its skills requirements are constantly changing based on new technologies and threats.
So, as organisations are trying to close this existing gap, the urgency for acquiring more up-to-date expertise is also ramping up.
However, advanced technologies like artificial intelligence (AI) are poised to change the game. They offer robust solutions for understaffed security teams grappling with an ever-mutating landscape of threats. While AI can handle repetitive tasks and sift through vast amounts of data to identify potential risks, the technology can also help develop the skill set needed for tomorrow’s cybersecurity professionals. This transformation is not only inevitable but also urgently needed to defend against a new generation of sophisticated cyberthreats.
The burden on security teams
Cybersecurity teams today have the herculean task of managing and guarding a constantly growing IT ecosystem. The influx of security alerts and massive volumes of telemetry data from different security products and digital applications overwhelms these teams daily. In fact, a Forrester survey revealed that 66pc of security professionals regularly experience burnout or extreme stress, while 51pc suffer from mental health issues.
‘A dangerous skills gap is looming large across the industry’
At the same time, budgets remain stagnant, and legacy processes have become overwhelmed, struggling to scale in a rapidly evolving environment. These challenges have financial repercussions too; the cost per security breach has skyrocketed to over $4 million, and network outages can cost up to $300,000 per hour. For CISOs and CIOs, the strain is palpable, making the current state of affairs almost unmanageable.
This complexity doesn’t only add to the workload; it fundamentally changes the type of work required. Legacy methods of manual threat detection and analysis are not suited to a new landscape characterised by advanced persistent threats and rapidly evolving AI-powered malware.
AI and machine learning technologies can become a cornerstone for managing this complexity. Advanced AI capabilities, such as alert triage, real-time threat identification and adaptive micro-segmentation, can parse through large volumes of telemetry data, separating credible threats from false positives. If organisations can leverage AI to handle these more routine tasks, they can reallocate their time and skills to high-value work that requires human insight and strategic thinking, such as proactive threat hunting, advanced forensic analysis, security architecture design and incident response planning.
AI efficiencies for security teams
AI can greatly enhance the capabilities of security teams by automating complex tasks, providing actionable insights and improving overall security response times. Here are some specific examples of how AI can assist:
Malware and vulnerability detection
AI algorithms can save security teams hours of work by pre-processing files and code snippets in real time to identify malware or vulnerabilities. Notably, AI techniques have proven effective in eliminating zero-day attacks across 90pc of common file types, drastically reducing the reliance on traditional signature-based security measures.
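The file pre-processing described above relies on learned models, but one classic signal such models pick up is byte entropy: packed or encrypted payloads look close to random. A minimal illustrative sketch in Python (the 7.2 threshold is invented for this example, not a vetted cut-off):

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    """Flag files whose entropy suggests compression or encryption.

    High entropy alone is not proof of malware (ZIP and JPEG files are
    also high-entropy), so treat this as a triage signal, not a verdict.
    """
    return byte_entropy(data) > threshold
```

In practice a signal like this would be one feature among many fed to a classifier, not a standalone detector.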
Behavioural analysis
By establishing a baseline of normal network, user and device behaviour, AI can detect deviations that may signify an attack, such as abnormal traffic patterns or unexpected access attempts. By continuously assessing user behaviour, systems can alert or automatically isolate suspicious devices and accounts in real time using adaptive micro-segmentation.
Predictive analytics
AI can use historical data to predict and identify potential future attack vectors, allowing security teams to proactively strengthen defences.
Automated incident response
AI can assist in the coordination of response activities, suggesting or automating actions to contain and mitigate threats, such as isolating affected systems or blocking suspicious IP addresses.
Context gathering
AI systems can automatically gather the context around alerts and incidents, pulling related data from various sources to speed up the investigation process.
Decision support
By providing real-time analysis and recommendations, AI helps security teams make informed decisions quickly, which is crucial during a potential security incident.
Phishing detection
AI can analyse incoming emails for signs of phishing, such as suspicious attachments or anomalies in the header information, reducing the risk of successful email-based attacks.
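A hedged sketch of two such header checks, using Python’s standard `email` library: a `Reply-To` domain that differs from the sender’s domain, and attachment extensions commonly abused in phishing. Production filters combine hundreds of features with trained models; the extension list here is illustrative:

```python
from email import message_from_string
from email.utils import parseaddr

# Illustrative, not exhaustive: extensions frequently abused in phishing
SUSPICIOUS_EXTENSIONS = (".exe", ".scr", ".js", ".vbs", ".iso")

def phishing_signals(raw_email: str) -> list[str]:
    """Return simple heuristic red flags found in a raw email."""
    msg = message_from_string(raw_email)
    signals = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    # A Reply-To pointing at a different domain is a classic spoofing sign
    if reply_addr and from_addr.split("@")[-1] != reply_addr.split("@")[-1]:
        signals.append("reply-to domain differs from sender domain")

    # Walk all MIME parts looking for risky attachment names
    for part in msg.walk():
        filename = part.get_filename() or ""
        if filename.lower().endswith(SUSPICIOUS_EXTENSIONS):
            signals.append(f"suspicious attachment: {filename}")

    return signals
```

A message with `From: alice@corp.example` but `Reply-To: attacker@evil.example` would surface one signal; a clean internal mail would surface none.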
Insider threat detection
By understanding user behaviour, AI can identify potential insider threats or compromised accounts through actions that deviate from the established pattern.
Threat intelligence
AI can aggregate and analyse threat intelligence from various sources, helping teams stay informed about the latest threats and ensure that security measures are up to date.
Vulnerability prioritisation
AI can prioritise vulnerabilities based on the potential impact and the current threat landscape, helping teams to address the most critical issues first.
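One way to picture such prioritisation is a context-weighted score: start from CVSS base severity and boost it when an exploit is circulating or the asset is internet-facing. The multipliers below are invented for illustration; real programmes tune weights against data such as EPSS exploit-probability feeds:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float              # base severity, 0.0-10.0
    exploit_available: bool  # known exploit in the wild
    internet_facing: bool    # asset exposure

def priority_score(v: Vulnerability) -> float:
    """Weight raw severity by threat-landscape context (toy weights)."""
    score = v.cvss
    if v.exploit_available:
        score *= 1.5  # active exploitation trumps theoretical severity
    if v.internet_facing:
        score *= 1.3  # exposed assets are reachable by anyone
    return round(score, 2)

vulns = [
    Vulnerability("CVE-A", cvss=9.8, exploit_available=False, internet_facing=False),
    Vulnerability("CVE-B", cvss=7.5, exploit_available=True, internet_facing=True),
]
ranked = sorted(vulns, key=priority_score, reverse=True)
# CVE-B outranks CVE-A despite its lower base CVSS, because context matters
```

The point of the sketch is the re-ordering: a lower-severity flaw under active exploitation on an exposed asset can legitimately jump the queue.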
Chatbots and virtual assistants
AI-powered chatbots and virtual assistants can provide immediate assistance to team members or employees, offering guidance on security protocols or assisting with common security queries.
Automation of routine tasks
AI can handle repetitive tasks such as patch management, configuration updates and log monitoring, freeing human analysts to focus on more strategic initiatives.
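For a flavour of the log-monitoring case, the sketch below automates one repetitive review task: counting failed SSH logins per source IP in standard OpenSSH log lines and surfacing repeat offenders. The threshold of five is arbitrary, chosen for illustration:

```python
import re
from collections import Counter

# Matches the common OpenSSH failed-login log line
FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def failed_login_sources(log_lines, threshold=5):
    """Count failed SSH logins per source IP and return repeat offenders."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(2)] += 1  # group 2 is the source IP
    return {ip: n for ip, n in counts.items() if n >= threshold}
```

The output of a scan like this could feed straight into the automated response actions described earlier, such as blocking the offending addresses.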
The skills evolution
To maximise these advantages, there’s a growing need for professionals who can manage and interpret AI-driven security systems. The human element has not been eliminated; rather, it has been elevated.
As security operations leverage more AI tools, professionals who once spent their days going through logs and setting static security rules must now adapt. The future belongs to those who can efficiently interpret the outputs of AI algorithms and manage complex AI-driven systems.
So, understanding the foundational concepts of AI and machine learning today is becoming as crucial as knowing the ins and outs of network protocols. Skills in data science, especially in manipulating and understanding large data sets, will be indispensable. This shift transcends the traditional divisions of IT roles. It’s not just the security analysts who need to adapt but also CISOs and even board members, who need to develop more nuanced knowledge and understanding of the changing risk landscape.
‘The future belongs to those who can efficiently interpret the outputs of AI algorithms and manage complex AI-driven systems’
For example, consider the concept of adaptive micro-segmentation enabled by AI, which continuously assesses user behaviour and isolates potential threats in real time. Unlike traditional, rule-based security frameworks that are static and require periodic manual updates, this dynamic system adapts security protocols in real time to emerging threats. Understanding this dynamism and knowing how to optimise security policies accordingly will be a sought-after skill set.
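As a hypothetical sketch of the decision layer in such a system, a continuously updated risk score can be mapped to progressively tighter network segments, so policy adapts without manual rule edits. The tiers and thresholds below are invented for illustration; a real micro-segmentation engine would drive firewall or SDN policy rather than return labels:

```python
def segment_for(device_risk: float) -> str:
    """Map a continuously updated risk score (0.0-1.0) to a segment tier."""
    if device_risk >= 0.8:
        return "quarantine"   # isolate: no lateral movement allowed
    if device_risk >= 0.5:
        return "restricted"   # essential services only
    return "standard"         # normal access
```

As behavioural telemetry pushes a device’s score from 0.2 towards 0.95, its reachable segment moves from `standard` to `quarantine` automatically; the human skill lies in choosing and validating the thresholds and the telemetry behind them.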
A new ethics
The adoption of AI also brings new ethical considerations into the fold. For instance, generative AI tools could be ‘poisoned’ by threat actors or could inadvertently create false data, leading to biased actions and decisions. Without a thoughtful understanding of the repercussions of automated actions, AI-driven policy enforcement might result in overly restrictive measures that infringe on user privacy.
As such, cybersecurity professionals must be equipped not just with technical expertise, but also with a deep understanding of ethical implications. This includes skills like conducting ethics impact assessments for AI tools and implementing policy-based controls that are both effective and ethical.
A working knowledge of regulations like the General Data Protection Regulation (GDPR), as well as ongoing developments like the EU AI Act, is also important.
Overall, the technological revolution of AI provides powerful tools for combating increasingly sophisticated cyberthreats, but it also demands a workforce equipped with a new blend of skills and ethical understanding. Organisations that invest in upskilling their workforce to navigate this intricate landscape will not only survive but thrive in the cybersecurity challenges of tomorrow.
By Sridhar Iyer
Sridhar Iyer is the director of machine learning and AI at Versa Networks. He leads the adoption of machine learning, AI and cloud deployments across Versa Networks’ products. He has a BEng in Computer Science and Engineering from Visvesvaraya Technological University and an MSc in Computer and Information Science from Syracuse University.