The Alarming Evolution of Cyber Threats and the Role of AI in the Dark Web

Thomas Ricardo - Cyber Analyst Reporter
May 27, 2024

In a chilling development, a threat actor has claimed unauthorised access to API keys for some of the largest cloud service providers, including Amazon Web Services (AWS), Microsoft Azure, MongoDB, and GitHub.

This revelation, broadcast via a post on the social media platform X by an account known as DarkWebInformer, has sent shockwaves through the cybersecurity community, triggering immediate investigations by the affected companies and security experts globally.

The unauthorised access to API keys poses a severe risk as these keys can be used to access sensitive data, manipulate cloud resources, and potentially disrupt services. 

API keys function as digital keys that allow applications to interact with cloud services. If compromised, they can lead to significant data breaches and financial losses.
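
To illustrate, the snippet below is a minimal sketch in Python (using the requests library against a hypothetical endpoint and a made-up key) of how an application presents an API key with each call; anyone who obtains that key can make the same requests.

import requests

# Hypothetical endpoint and key, for illustration only
API_KEY = "example-key-not-real"
ENDPOINT = "https://api.example.com/v1/databases"

# The key travels with every request; whoever holds it gets the same access
response = requests.get(ENDPOINT, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=10)
print(response.status_code)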

Security experts warn that the exposure of these keys could lead to unauthorised access to sensitive data stored in cloud databases, manipulation or deletion of cloud resources, and large-scale data breaches affecting millions of users.

In response to the claims, representatives from AWS, Azure, MongoDB, and GitHub have issued statements assuring users that investigations are underway.

They have advised immediate measures, including the rotation of API keys, the implementation of multi-factor authentication (MFA), and heightened monitoring for unusual activity.

Azure and MongoDB have recommended enhanced security protocols, while GitHub has emphasised the importance of securing API keys and provided users with guidelines. 

Here is a sample code snippet for rotating AWS API keys using the AWS Command Line Interface (CLI):

# Create a new access key for the user
aws iam create-access-key --user-name <user-name>

# Delete the old access key once applications have been updated to use the new one
aws iam delete-access-key --user-name <user-name> --access-key-id <old-access-key-id>

Users are encouraged to follow similar procedures for other cloud services and to stay vigilant against potential security threats.
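
For teams managing many credentials, the same rotation can be scripted rather than run by hand. The snippet below is a sketch using Python's boto3 library, assuming AWS credentials with IAM permissions are already configured; the user name is a placeholder.

import boto3

iam = boto3.client("iam")
user = "<user-name>"  # placeholder IAM user whose key is being rotated

# Create a replacement access key for the user
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print("New access key ID:", new_key["AccessKeyId"])

# Once applications have switched to the new key, disable and delete the old ones
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    if key["AccessKeyId"] != new_key["AccessKeyId"]:
        iam.update_access_key(UserName=user, AccessKeyId=key["AccessKeyId"], Status="Inactive")
        iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])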

While this incident grabs headlines today, we must also look ahead to its wider implications, specifically how artificial intelligence (AI) fits into the changing nature of cybercrime.

Technology once discussed as ‘artificial general intelligence’, and even hailed as the ‘singularity’ that would take humanity to its next evolutionary step, is now being weaponised by threat actors for illegal purposes, from supercharging malicious botnets to transforming the sophistication and complexity of cyber-attacks.

AI can automate these and countless other tasks involved in a cyberattack: it can scan for vulnerabilities just as it can parse code, and execute phishing attacks just as it can write code.

It is also effective at cracking passwords. This automation means that cybercriminals can find and attack more systems in less time and with less effort.

The same is true of AI-powered social-engineering attacks. Armed with information from a person’s social media profile and other open-source intelligence, AI can craft increasingly effective phishing emails, automatically tailoring content to each recipient to maximise the chances of success.

Image: Kate Fazzini, author of Kingdom of Lies. Source: C-SPAN

Kate Fazzini, a cybersecurity journalist and author, has discussed the evolving threats in the cybersecurity landscape at length, particularly how advancements in technology such as machine learning are being leveraged by cybercriminals.

In her book "Kingdom of Lies: Unnerving Adventures in the World of Cybercrime," Fazzini examines how that kind of personal context can be turned against its owner:

‘A machine learning-[assisted] phisher, for example, with access to details of your previous cyber-security discussions, job applications, upcoming conference registrations and holiday plans could easily use the context … to better manipulate you.’

And if the hackers try and fail? There’s always another time, and another way.

AI is also being used by cybercriminals who want to sabotage systems: malware could be taught to learn from antivirus detection technologies in order to make evasion more efficient and effective.

Such a dynamic approach could render traditional security measures ineffective, with malware evolving, and even learning, how to avoid detection by security tools.

In addition, AI’s capacity to process huge amounts of data helps cybercriminals to identify the most attractive targets.

Millions of potential targets exist online, and the companies with the most valuable intellectual property are also the most likely to invest heavily in security. AI can sift through that landscape to identify the most lucrative targets, and the greater the payoff, the greater the incentive to attack.

Confronted by escalating threats such as these, those organisations and individuals most heavily invested in their digital environments will have to take pre-emptive measures: rotate their API keys; implement MFA; monitor for anomalous activity; and keep up with the latest threats and best practices to deal with them. 
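
As one illustration of the monitoring step, the sketch below uses Python's boto3 client for AWS CloudTrail to list recent access-key management events; the 24-hour window and the event names checked are assumptions chosen for the example, not a prescribed policy.

from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)  # arbitrary look-back window

# Review key-management events for anything unexpected
for event_name in ("CreateAccessKey", "UpdateAccessKey", "DeleteAccessKey"):
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )["Events"]
    for event in events:
        print(event["EventTime"], event_name, event.get("Username", "unknown"))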

For instance, this report of claimed unauthorised access to API keys at major cloud providers, including AWS, Azure, MongoDB and GitHub, reminds us that the threat landscape is constantly evolving.

As AI continues to develop, it will enable both attacks and defences. This places a significant burden on those responsible for protecting information systems.

Organisations must strengthen their defences while remaining adaptive and agile: alert to new threat trends, flexible in adjusting their security posture, and proactive in this rapidly evolving cyber environment.

The sooner that AI is integrated into cyber defences, the better, as this new technology will play a central role in the protection of our digital assets in an increasingly complex threat environment.

Key Insights

  • Major Cloud Service Breach: A threat actor claimed unauthorised access to API keys for AWS, Azure, MongoDB, and GitHub, prompting immediate investigations and security measures from the affected companies.
  • Severe Risks and Expert Warnings: Unauthorised access to these keys could lead to significant data breaches, manipulation of cloud resources, and financial losses, with experts warning of potential large-scale impacts.
  • AI in Cybercrime: The incident highlights the growing use of AI by cybercriminals to automate and enhance attacks, making them more sophisticated and difficult to detect, emphasising the need for robust, adaptive cybersecurity measures.
