Navigating the AI Frontier: Policymaking in the Age of Cybersecurity Innovation
Mark De Boer
Editor Alexis Pinto
November 6, 2023


AI's Critical Impact on Cybersecurity Policy

In an era where a single vulnerable computer may face upwards of 2,000 cyberattacks daily, the deployment of Artificial Intelligence (AI) is no longer optional but a necessity to manage the deluge of security breaches. However, a significant gap persists in the cybersecurity arena: researchers applying AI to security are outnumbered roughly thirty to one by those focused on general AI development, and it was not until after 2016 that AI-driven cybersecurity solutions saw a meaningful uptick.

The critical importance of AI in the domain of cybersecurity has been reinforced by experts like Admiral Mike Rogers, the former director of the National Security Agency and U.S. Cyber Command. He emphasises, as cited by Max Smeets in The National Interest,

"Artificial Intelligence and machine learning are integral to the future of cybersecurity... It's a question of when, not if."

This highlights an urgent call to action for policymakers to deepen their understanding of AI technology to effectively shape the cybersecurity landscape of tomorrow.

Figure 2: Geographic distribution of research papers presenting a novel AI for cybersecurity solution

Cyber Persistence vs. NIST Framework: A Strategic Analysis

To this end, the insights provided by esteemed researchers like Igor Kovač from the University of Cincinnati and his colleagues from the Jozef Stefan Institute are invaluable. They scrutinise the application of 700 distinct AI algorithms across the cybersecurity spectrum using the National Institute of Standards and Technology (NIST) framework, aiming to unravel how these innovations align with the cybersecurity objectives of identify, protect, detect, respond, and recover.

Their analysis presents a counter-narrative to the conventional discourse, which assumes that numerical dominance in AI publications equates to leadership in AI for cybersecurity.

They reveal that the United States, buttressed by EU collaborations, is at the forefront of developing groundbreaking AI solutions in this domain. This finding is not just a matter of national pride; it has significant implications for understanding where the epicentre of cybersecurity innovation truly lies.

Moreover, the distribution of these AI applications across the NIST categories uncovers a trend: 47 percent are dedicated to detection tasks. This is particularly intriguing when juxtaposed with the strategic evolution towards Cyber Persistence Strategy, as seen in the U.S. National Cybersecurity Strategy revisions. 

The researchers highlight a discordance between the anticipatory nature of Cyber Persistence Theory and the NIST framework, positing that the latter's lack of focus on anticipation may limit its applicability to modern cybersecurity strategies.

The cautious approach that states must take when deploying AI, given the scarcity of suitable training data and the potential for escalatory responses, only compounds the complexity of the issue.

Their findings also underscore that learning algorithms are the predominant method used in AI for cybersecurity, a testament to the critical role of machine learning in modern cyber defence mechanisms.

Despite this, certain areas like penetration testing and behaviour modelling have seen less innovation, suggesting a possible satisfaction with existing technologies or a lack of perceived need for advancement.
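
To make the machine-learning point above concrete, the sketch below shows what a simple learning-based detection component can look like in practice. It is a hypothetical illustration only, not one of the algorithms surveyed by the researchers: it assumes synthetic network-flow features and uses an off-the-shelf unsupervised model (scikit-learn's IsolationForest) to flag unusual traffic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical illustration: synthetic "network flow" features stand in for real
# telemetry. Each row is (bytes transferred, connection duration in seconds).
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(500, 2))
suspicious_traffic = rng.normal(loc=[5000, 30.0], scale=[500, 5.0], size=(10, 2))
flows = np.vstack([normal_traffic, suspicious_traffic])

# An unsupervised learning algorithm flags statistically unusual flows,
# mirroring the "detect" function of the NIST framework.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(flows)  # -1 = anomalous, 1 = normal

print(f"Flagged {np.sum(labels == -1)} of {len(flows)} flows as anomalous.")
```

In a production setting the features, model choice, and thresholds would be far more elaborate, but the pattern of learning a baseline of normal behaviour and flagging deviations is representative of the detection-oriented work the study describes.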


Beyond the technicalities, the researchers broach the contentious topic of regulation. They warn against the perils of overregulation, which could stifle innovation and hinder the competitive advantage of the United States in the geopolitical landscape. 

Conversely, they advocate for a regulatory environment that preserves democratic values and enhances transparency, particularly in the use of AI within intelligence and security agencies.

When the 700 distinct AI algorithms tailored for cybersecurity are categorised according to the NIST framework, it emerges that nearly half (47%) are dedicated to the detection of anomalies and cybersecurity events. Not far behind, 26% are designed for proactive protection measures.

The remaining solutions are divided between identification and response capabilities, at 19% and 8% respectively. 

This distribution is somewhat unexpected, given the trend towards adopting a Cyber Persistence strategy of persistent engagement, observable in the evolution of cybersecurity strategies in both the U.S. (from 2018 to 2023) and the EU's member states, alongside their political discourse and policy actions.

Figure 3: Distribution of research papers presenting a novel AI for cybersecurity solution according to the cybersecurity purpose
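
For readers who want to replicate this kind of breakdown on their own literature survey, a minimal sketch of the tally is shown below. It is an illustration under stated assumptions, not the researchers' methodology: the per-paper labels are hypothetical placeholders, and the categories are simply the five NIST framework functions named above.

```python
from collections import Counter

# The five functions of the NIST Cybersecurity Framework referenced above.
NIST_FUNCTIONS = ["identify", "protect", "detect", "respond", "recover"]

# Hypothetical placeholder labels: in practice, each surveyed paper would be
# tagged with the NIST function its AI solution primarily serves.
paper_labels = ["detect", "protect", "detect", "identify", "respond",
                "detect", "protect", "detect", "identify", "detect"]

counts = Counter(paper_labels)
total = len(paper_labels)

for function in NIST_FUNCTIONS:
    share = 100 * counts[function] / total  # Counter returns 0 for absent keys
    print(f"{function:>8}: {counts[function]:2d} papers ({share:.0f}%)")
```

Applied to the actual corpus of roughly 700 annotated solutions, the same tally would reproduce the 47/26/19/8 split reported above.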

The Policy Implications of AI in Cybersecurity 

Addressing this conundrum involves multiple facets. Cyber Persistence Theory diverges from the NIST framework by incorporating a critical element of foresight that is absent from the latter.

Cyber Persistence Theory advocates for a proactive posture in cybersecurity—actively identifying and securing against one's digital weaknesses before they can be exploited and leveraging adversaries' vulnerabilities to bolster one's security posture. This proactive stance is central to maintaining a strategic advantage in cyberspace.

The Proactive vs. Reactive Cybersecurity Dilemma

Integrating this proactive approach within the more reactive NIST framework could misrepresent its strategic essence. Additionally, governments typically exhibit caution in adopting new technologies; the idea of entrusting AI with autonomous decision-making in cyber response actions raises concerns about unintended escalation, akin to the hesitancy around deploying autonomous vehicles due to liability issues.

Moreover, for AI to be effectively implemented in cybersecurity response, it requires extensive datasets for training. Such comprehensive datasets, specifically tailored for training AI in reactive cybersecurity measures, remain in short supply, posing a significant hurdle to the practical application of AI in this context.

Looking toward the burgeoning era of artificial intelligence, one must ponder a critical question: How will nations collectively navigate the ethical quandaries of AI adoption? 

The dilemma is multifaceted, weighing the stimulation of research and innovation against the necessity of crafting astute policies and regulations that are both practical for businesses and respectful of ethical boundaries. 

Will AI serve as a catalyst for enhanced safety and as a versatile tool to address our cyberspace and societal quandaries? Or will it impose a prohibitive cost on businesses, entangling them in a complex web of regulatory compliance as it permeates industry after industry, shaping trade and consumer interactions in profound ways?

Our editorial aims to stoke the fires of critical discourse, assessing the delicate interplay between the fortifying potential of AI technology and its ability to amplify threats to a staggering degree. 

Does keeping stride with the ubiquitous integration of AI into our social and economic systems necessitate a deeper, more widespread commitment to education and grassroots training?

Navigating this delicate balance requires policymakers to operate with insight and foresight, arming themselves with the distilled wisdom of academic research. 

It is a journey through a maze where one must be keenly aware of AI's complexities and strategic significance, striving to foster an environment ripe for innovation while vigilantly protecting democratic values.

Therefore, this narrative is not merely a call to action but an invitation to policymakers to incorporate a deep understanding of AI into the genesis of cybersecurity legislation and policy-making. 

With AI increasingly becoming a strategic asset in the global arena, the scholarly work of experts like Kovač and his colleagues is indispensable. Their research lights the path for enacting policies that synchronise the pace of technological breakthroughs with conscientious ethical governance and the safeguarding of national security.
