Auquan is reshaping financial services with AI agents that automate research, risk, and ESG reporting. Trusted by top global institutions, its platform removes manual work so teams can focus on strategy, not formatting.
AI cheating tool Cluely has raised $5.3 million to offer real-time, undetectable support during interviews, exams, meetings, and more. Creator Chungin “Roy” Lee says the tool redefines cheating, arguing it helps people work smarter, not break the rules.
Spur, an AI-driven startup, has raised $4.5 million to automate website testing. Users type commands like “add to cart” or “apply for a job,” and Spur’s agent simulates the action, detects bugs, and gives instant feedback, making quality checks faster and easier for development teams.
From Telegram's Data Sharing to AI-Driven Election Interference: Unveiling Cyber Threats on Social Platforms
Cybercriminals and state-sponsored actors exploit social media for espionage and disinformation. Telegram is under fire for sharing data with Russia’s FSB, prompting Ukraine to restrict it. OpenAI's Ben Nimmo fights AI-driven disinformation targeting U.S. and European elections.
CNC Cyber Pulse: Social Media Exploitation and AI-Driven Disinformation in Global Statecraft
In today's digital landscape, social media platforms have become pivotal arenas for both organised cybercriminals and state-sponsored espionage activities. Two recent developments underscore how these platforms are being leveraged to influence public opinion, disrupt democratic processes, and conduct covert operations. This analysis delves into the multifaceted role of social media in facilitating cybercrime and statecraft, as well as the emerging impact of artificial intelligence in these domains.
Telegram Under Fire: Russian Access Confirmed, Ukraine Responds with Platform Restrictions
Previously, CNC reported on significant shifts within Telegram following legal pressures faced by the platform's leadership. On September 23, 2024, Telegram announced a policy change to comply with valid legal requests, agreeing to share user IP addresses and phone numbers to enhance moderation and cooperation with authorities. This shift includes deploying an AI-supported moderation team and launching a bot for reporting illegal content, marking a substantial change in the platform's approach to user privacy and content management.
However, this new stance highlights Telegram's complex history with data sharing, especially in relation to Russia. Reports confirm that since 2018, Telegram has provided Russia’s Federal Security Service (FSB) with access to user data—a level of cooperation denied to Western authorities. Ukraine’s National Coordination Centre for Cybersecurity recently limited Telegram use in defence sectors, citing Russian intelligence exploitation. Former NSA Director Rob Joyce noted,
“The idea that he [Durov] could come and go while defying Russia is inconceivable,”
emphasising the geopolitical nuances of Telegram’s data-sharing practices.
The fallout has been swift in underground circles. Nearly every major forum, from Exploit to Cracked, has opened threads to discuss migration options, with many users advocating for alternative platforms such as Jabber, Tox, Matrix, Signal, and Session.
Source: YouTube, CNN, Ben Nimmo
AI and Election Security: OpenAI’s Ben Nimmo Leads the Fight Against Foreign Disinformation
An Editorial Extract and Review by CNC on Ben Nimmo—"This Threat Hunter Chases U.S. Foes Exploiting AI to Sway the Election"
As the United States approaches the 2024 presidential election, the intersection of artificial intelligence and election security has become increasingly critical. Ben Nimmo, the principal threat investigator at OpenAI, is at the forefront of efforts to counter foreign disinformation campaigns that leverage AI technologies. According to a report issued by The Washington Post, Nimmo has discovered that nations such as Russia, China, and Iran are experimenting with tools like ChatGPT to generate targeted social media content aimed at influencing American political opinion.
In a significant June briefing with national security officials, Nimmo's findings were so impactful that officials meticulously highlighted and annotated key sections of his report. This reaction underscores the growing urgency surrounding AI-driven disinformation and its potential impact on democratic processes. While Nimmo characterises the current attempts by foreign adversaries as "amateurish and bumbling," there is a palpable concern that these actors may soon refine their tactics and expand operations to more effectively disseminate divisive rhetoric using AI.
A notable example from Nimmo's recent investigations involves an Iranian operation designed to increase polarisation within the United States. The campaign distributed long-form articles and social media posts on sensitive topics such as the Gaza conflict and U.S. policies toward Israel, aiming to manipulate public discourse and exacerbate societal divisions.
Nimmo's work has gained particular significance as other major tech companies reduce their efforts to combat disinformation. His contributions are viewed by colleagues and national security experts as essential resources, especially in the absence of broader industry initiatives. However, some peers express caution regarding potential corporate influences on the transparency of these disclosures. Darren Linvill, a professor at Clemson University, remarked that Nimmo
"has certain incentives to downplay the impact,"
suggesting that OpenAI's business interests might affect the extent of information shared.
Despite these concerns, OpenAI and Nimmo remain steadfast in their mission. Nimmo continues to focus on detecting and neutralising disinformation campaigns before they gain momentum, aiming to safeguard the integrity of the electoral process from foreign interference amplified by artificial intelligence. His efforts highlight the critical role of vigilant monitoring and proactive intervention in protecting democratic institutions in the age of AI-driven misinformation.
Australia is facing a double threat to its financial security: cyberattacks on major superannuation funds and the fallout from Trump’s “Liberation Day” tariff declaration. Both have exposed deep vulnerabilities in retirement savings, leaving Australia’s future wealth increasingly at risk.
Major cyber alliances are buckling. Australia’s super funds are under digital siege, the US slashes cyber defenses, and Five Eyes unity is faltering. As threats mount from China and Russia, the West’s fractured response risks emboldening adversaries and weakening global cyber resilience.
Elon Musk’s X platform has been hit by a massive cyber-attack, leaving users in the U.S. and UK unable to refresh feeds or access accounts. Musk confirmed the attack’s severity, pointing to IP traces from “the Ukraine area,” though experts caution that attackers routinely mask their true origin.
Late last week, an extraordinary announcement signaled a dramatic shift in U.S. cybersecurity policy: the Trump administration deprioritized Russia as a leading cyber threat. Experts fear downplaying Moscow’s aggression could expose American networks to new risks and undermine national security.