Editor Alexis Pinto
July 24, 2023



The alarming revelation that Artificial Intelligence (AI) is being exploited by cybercriminals, as reported by Canada's top cybersecurity official, illuminates the dark side of AI technology. Like a double-edged sword, AI, originally touted as a powerful tool to address cybersecurity threats, is now being used for malicious purposes, from creating sophisticated phishing emails to spreading disinformation. This signals a new era in the cybercrime landscape, where advances in technology open up new opportunities for misuse.

Sami Khoury, head of the Canadian Centre for Cyber Security, has reported an uptick in AI's exploitation in cybercrimes. In a recent interview, he mentioned that AI is being used in crafting deceptive emails and misinformation campaigns, confirming what many cyber watchdog groups have been warning about for some time now. The specifics are currently scant, but the declaration alone adds a sense of urgency to the rising chorus of concern regarding the misuse of AI.

The advent of large language models (LLMs), AI programs that can craft realistic dialogue and documents, has complicated the cyber landscape. These sophisticated models, such as OpenAI's ChatGPT, have the potential to impersonate an organisation or individual convincingly, posing a significant threat to cybersecurity. A Europol report released in March reiterated this threat, warning of the possibility of AI-enabled impersonation. Simultaneously, the National Cyber Security Centre in Britain highlighted the risk of criminals using LLMs to carry out advanced cyber attacks.

We've already started seeing glimpses of this AI-fuelled dystopia, as cybersecurity researchers demonstrate malicious uses of the technology. For instance, a former hacker recently discovered an LLM trained on malicious material, which produced a convincing phishing email asking for a cash transfer. The threat is not just hypothetical; it's already here.

"I understand this may be short notice," the LLM said, "but this payment is incredibly important and needs to be done in the next 24 hours."

However, the true concern lies not just in the fact that AI can create convincing phishing emails or malware. Khoury warned that AI is evolving so rapidly that its malicious potential is difficult to assess, let alone contain.

The prospect of AI delivering faster, more efficient cyber attacks, aided by its ability to learn and adapt, could pose unprecedented challenges to cybersecurity.

As AI continues to develop and infiltrate every aspect of our digital lives, it's clear that it will play an increasingly significant role in cybersecurity. While AI presents an opportunity to enhance our defences, its misuse underscores the need for stringent safeguards. It's a race against time for security professionals, policymakers, and AI developers to create a robust framework that can prevent AI from becoming the next superweapon in the cybercriminal arsenal.


Khoury said that while the use of AI to draft malicious code was still in its early stages - "there's still a way to go because it takes a lot to write a good exploit" - the concern was that AI models were evolving so quickly that it was difficult to get a handle on their malicious potential before they were released into the wild.


"Who knows what's coming around the corner," he said.

The Canadian Centre for Cyber Security, alongside global cybersecurity agencies, has a significant task ahead. They must work to understand these emerging threats, develop countermeasures, and educate the public about AI's potential misuse. While it's a daunting task, it's crucial to ensure that AI remains a tool for progress and doesn't become a weapon of destruction.
