OpenAI’s $40 billion funding deal led by SoftBank could make it one of the most valuable private firms in the world. But there is a catch: it must fully transition to a for-profit model by the end of 2025 or risk losing billions, marking a major shift for the AI company.
Elon Musk’s xAI has bought social media platform X for $33 billion, calling it a major step in combining AI with real-time public conversation. Critics are concerned about data privacy and the true value of X, while others see it as a bold move to challenge AI leader OpenAI.
From quiet meetups to packed arenas, AI conferences are lighting up cities worldwide in 2025. With tech leaders, investors and innovators joining forces, these events mark a turning point as the global push toward Industry 5.0 gains speed, creativity and serious attention.
The Alarming Evolution of Cyber Threats and the Role of AI in the Dark Web
A threat actor claimed unauthorized access to API keys for AWS, Azure, MongoDB, and GitHub, triggering immediate investigations. Experts warn of severe risks, including large-scale breaches, as AI increasingly powers more sophisticated cyberattacks.
This revelation, broadcast via a post on the social media platform X by an account known as DarkWebInformer, has sent shockwaves through the cybersecurity community, triggering immediate investigations by the affected companies and security experts globally.
The unauthorised access to API keys poses a severe risk as these keys can be used to access sensitive data, manipulate cloud resources, and potentially disrupt services.
API keys function as digital keys that allow applications to interact with cloud services. If compromised, they can lead to significant data breaches and financial losses.
Security experts warn that the exposure of these keys could lead to unauthorised access to sensitive data stored in cloud databases, manipulation or deletion of cloud resources, and large-scale data breaches affecting millions of users.
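Part of the danger is that an API key is a bearer credential: the service grants access to whoever presents the key, with no further proof of identity. The toy sketch below (a stand-in shell function, not any real cloud API; the key value is a made-up placeholder) illustrates the point:

```shell
# Toy illustration of bearer-style API-key authentication: the request is
# authorised purely by presenting the right key string.
SERVICE_KEY="s3cr3t-example-key"        # the key the provider issued

handle_request() {                      # stand-in for the cloud service
  if [ "$1" = "$SERVICE_KEY" ]; then
    echo "200 OK: access granted"
  else
    echo "403 Forbidden"
  fi
}

handle_request "s3cr3t-example-key"     # legitimate application: 200 OK
handle_request "wrong-key"              # invalid key: 403 Forbidden
handle_request "s3cr3t-example-key"     # attacker holding the leaked key: 200 OK
```

The service cannot distinguish the third call from the first, which is why a leaked key must be revoked outright rather than merely "protected".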
In response to the claims, representatives from AWS, Azure, MongoDB, and GitHub have issued statements assuring users that investigations are underway.
Azure and MongoDB have recommended enhanced security protocols, while GitHub has emphasised the importance of securing API keys and provided users with guidelines.
Here is a sample code snippet for rotating AWS API keys using the AWS Command Line Interface (CLI):

```shell
# Create a new access key
aws iam create-access-key --user-name <user-name>

# Delete the old access key
aws iam delete-access-key --access-key-id <old-access-key-id>
```
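When rotating in earnest, deleting the old key immediately can break running applications; it is generally safer to create the new key, deploy it, deactivate the old key, and only delete it once nothing fails. The sketch below outlines that sequence (the user name and key ID are placeholders, and the `run` wrapper prints each `aws` command instead of executing it, so the flow can be reviewed without touching a real account):

```shell
# Hypothetical key-rotation flow; the user name and key ID are placeholders.
# run() echoes each aws command rather than executing it (a dry run).
run() { echo "+ $*"; }

rotate_key() {
  user="$1"
  old_key_id="$2"
  # 1. Create the replacement key before touching the old one.
  run aws iam create-access-key --user-name "$user"
  # 2. (Deploy the new key to every application that used the old one.)
  # 3. Deactivate rather than delete, so the change can be rolled back.
  run aws iam update-access-key --user-name "$user" \
      --access-key-id "$old_key_id" --status Inactive
  # 4. Once nothing breaks, delete the old key permanently.
  run aws iam delete-access-key --user-name "$user" \
      --access-key-id "$old_key_id"
}

rotate_key "example-user" "AKIAOLDKEYEXAMPLE000"
```

Removing the `run` wrapper turns the dry run into the real procedure.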
Users are encouraged to follow similar procedures for other cloud services and to stay vigilant against potential security threats.
While this incident grabs headlines today, we must also look ahead to its wider implications, specifically how artificial intelligence (AI) fits into the changing nature of cybercrime.
What was once called ‘artificial general intelligence’, even hailed as the ‘singularity’ that would take humanity to its next evolutionary step, is now being weaponised by threat actors for illegal purposes, from supercharging malicious botnets to transforming the sophistication and complexity of cyber-attacks.
AI can also automate these and countless other tasks unique to cyberattacks: it can scan for vulnerabilities just as readily as it parses code, execute phishing campaigns just as readily as it writes code, and it is excellent at cracking passwords. This automation means cybercriminals can find and attack far more systems in far less time, with little effort.
The same is true of AI-powered social-engineering attacks. Armed with information gleaned from a person’s social media profiles and other open-source intelligence, AI can craft increasingly effective spear-phishing emails, automatically tailoring content to each recipient to maximise the chances of success.
Kate Fazzini, a cybersecurity journalist and author, has extensively discussed the evolving threats in the cybersecurity landscape, particularly how advancements in technology, such as machine learning, are being leveraged by cybercriminals.
In her book "Kingdom of Lies: Unnerving Adventures in the World of Cybercrime," Fazzini describes the threat:
‘A machine learning-[assisted] phisher, for example, with access to details of your previous cyber-security discussions, job applications, upcoming conference registrations and holiday plans could easily use the context … to better manipulate you.’
And if the hackers try and fail? There’s always another time, and another way.
AI is also being used by cybercriminals who want to sabotage systems: malware can be taught to learn from antivirus detection technologies, making evasion more efficient and effective.
Such a dynamic approach undermines traditional security measures, with malware evolving, and even learning, to avoid detection by security tools.
In addition, AI’s capacity to process huge amounts of data helps cybercriminals to identify the most attractive targets.
Millions of online destinations exist, and companies with valuable intellectual property are the ones most likely to spend heavily on security. AI can nonetheless sift through them to identify the most lucrative targets, and the greater the payoff, the greater the incentive to attack.
Confronted by escalating threats such as these, organisations and individuals heavily invested in their digital environments will have to take pre-emptive measures: rotate their API keys regularly; implement multi-factor authentication (MFA); monitor for anomalous activity; and keep up with the latest threats and the best practices for dealing with them.
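One cheap pre-emptive measure is scanning your own source trees for credential-shaped strings before they are ever pushed. The sketch below greps for the characteristic AWS access-key-ID shape; dedicated scanners such as gitleaks or truffleHog cover far more credential formats, and the demo directory and planted key here are purely illustrative:

```shell
# Scan a directory tree for strings shaped like AWS access key IDs:
# "AKIA" followed by 16 uppercase letters or digits.
scan_for_keys() {
  grep -rEn 'AKIA[0-9A-Z]{16}' "$1" || true   # no match is not an error
}

# Demo against a throwaway directory containing a planted fake key.
demo_dir=$(mktemp -d)
printf 'aws_key = "AKIAEXAMPLEKEY123456"\n' > "$demo_dir/settings.py"
scan_for_keys "$demo_dir"                     # prints the offending line
rm -rf "$demo_dir"
```

Run as a pre-commit hook, a check like this catches hard-coded keys before they ever reach a repository, public or otherwise.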
For instance, the recent report of compromised API keys at major cloud providers, including AWS, Azure, MongoDB and GitHub, reminds us that the threat landscape is constantly evolving.
As AI continues to develop, it will enable both attacks and defences. This places a significant burden on those responsible for protecting information systems.
Organisations must strengthen their defences while remaining adaptive and agile: they need to be alert to new threat trends, flexible in adjusting their security posture and take proactive action in this rapidly evolving cyber environment.
The sooner that AI is integrated into cyber defences, the better, as this new technology will play a central role in the protection of our digital assets in an increasingly complex threat environment.
NVIDIA's Blackwell Chip ignites an AI innovation race, slashing DeepSeek R1’s time to 10 seconds. Dobot’s $27,500 humanoid robot dazzles, sending stocks soaring with affordable automation flair. Alphabet’s $32B Wiz buy excites markets, yet U.S. cyberattacks cast a dark shadow over tech’s rise.
Elon Musk’s X AI platform has been hit by a massive cyber-attack, leaving users in the U.S. and UK unable to refresh feeds or access accounts. Musk confirmed the attack’s severity, pointing to IP traces from “the Ukraine area,” though experts caution that origin masking is possible.
Late last week, an extraordinary announcement signaled a dramatic shift in U.S. cybersecurity policy: the Trump administration deprioritized Russia as a leading cyber threat. Experts fear downplaying Moscow’s aggression could expose American networks to new risks and undermine national security.
Since early 2022, the British government has tied Iran to over 20 plots threatening UK citizens, reflecting Tehran’s expanding covert tactics. These attempts—spanning assassination, kidnapping, and surveillance—mark a significant escalation on British soil.