Biden’s climate incentives face uncertainty as Trump’s renewed tariffs push Chinese solar giants like Trina Solar to relocate production to the US via partnerships. This shift signals a new energy arms race, intensifying global competition in 2025.
OpenAI proposes bold U.S. alliances to outpace China in AI, advocating for advanced infrastructure and economic zones. Meanwhile, SMIC, China’s chip giant, faces U.S. restrictions but remains optimistic, leveraging AI-driven demand for legacy chips to sustain growth amid global challenges.
Big Tech returns to offices, Musk shapes AI policy, and Trump’s comeback fuels debates on tech-politics fusion. Biden-Xi talks spark questions on U.S.-China relations as global power shifts. From Silicon Valley to the White House, this week reshaped the future in surprising ways!
From Telegram's Data Sharing to AI-Driven Election Interference: Unveiling Cyber Threats on Social Platforms
Cybercriminals and state-sponsored actors exploit social media for espionage and disinformation. Telegram is under fire for sharing data with Russia’s FSB, prompting Ukraine to restrict it. OpenAI's Ben Nimmo fights AI-driven disinformation targeting U.S. and European elections.
CNC Cyber Pulse: Social Media Exploitation and AI-Driven Disinformation in Global Statecraft
In today's digital landscape, social media platforms have become pivotal arenas for both organised cybercrime syndicates and state-sponsored espionage. Two recent developments underscore how these platforms are being leveraged to influence public opinion, disrupt democratic processes, and conduct covert operations. This analysis examines the multifaceted role of social media in facilitating cybercrime and statecraft, as well as the emerging impact of artificial intelligence in these domains.
Telegram Under Fire: Russian Access Confirmed, Ukraine Responds with Platform Restrictions
Previously, CNC reported on significant shifts within Telegram following legal pressures faced by the platform's leadership. On September 23, 2024, Telegram announced a policy change to comply with valid legal requests, agreeing to share user IP addresses and phone numbers to enhance moderation and cooperation with authorities. This shift includes deploying an AI-supported moderation team and launching a bot for reporting illegal content, marking a substantial change in the platform's approach to user privacy and content management.
However, this new stance highlights Telegram's complex history with data sharing, especially in relation to Russia. Reports confirm that since 2018, Telegram has provided Russia’s Federal Security Service (FSB) with access to user data—a level of cooperation denied to Western authorities. Ukraine’s National Coordination Centre for Cybersecurity recently restricted Telegram use in its defence sector, citing exploitation of the platform by Russian intelligence. Former NSA cybersecurity director Rob Joyce noted,
“The idea that he [Durov] could come and go while defying Russia is inconceivable,”
emphasising the geopolitical nuances of Telegram’s data-sharing practices.
The fallout has been swift in underground circles: nearly every major forum, from Exploit to Cracked, has opened threads to discuss migration options, with many users advocating for alternatives such as Jabber, Tox, Matrix, Signal, and Session.
AI and Election Security: OpenAI’s Ben Nimmo Leads the Fight Against Foreign Disinformation
An Editorial Extract and Review by CNC on Ben Nimmo—"This Threat Hunter Chases U.S. Foes Exploiting AI to Sway the Election"
As the United States approaches the 2024 presidential election, the intersection of artificial intelligence and election security has become increasingly critical. Ben Nimmo, the principal threat investigator at OpenAI, is at the forefront of efforts to counter foreign disinformation campaigns that leverage AI technologies. According to The Washington Post, Nimmo has discovered that nations such as Russia, China, and Iran are experimenting with tools like ChatGPT to generate targeted social media content aimed at influencing American political opinion.
In a significant June briefing, Nimmo's findings proved so impactful that national security officials meticulously highlighted and annotated key sections of his report. This reaction underscores the growing urgency surrounding AI-driven disinformation and its potential impact on democratic processes. While Nimmo characterises the current attempts by foreign adversaries as "amateurish and bumbling," there is a palpable concern that these actors may soon refine their tactics and expand their operations to disseminate divisive rhetoric more effectively using AI.
A notable example from Nimmo's recent investigations involves an Iranian operation designed to increase polarisation within the United States. The campaign distributed long-form articles and social media posts on sensitive topics such as the Gaza conflict and U.S. policies toward Israel, aiming to manipulate public discourse and exacerbate societal divisions.
Nimmo's work has gained particular significance as other major tech companies reduce their efforts to combat disinformation. His contributions are viewed by colleagues and national security experts as essential resources, especially in the absence of broader industry initiatives. However, some peers express caution regarding potential corporate influences on the transparency of these disclosures. Darren Linvill, a professor at Clemson University, remarked that Nimmo
"has certain incentives to downplay the impact,"
suggesting that OpenAI's business interests might affect the extent of information shared.
Despite these concerns, OpenAI and Nimmo remain steadfast in their mission. Nimmo continues to focus on detecting and neutralising disinformation campaigns before they gain momentum, aiming to safeguard the integrity of the electoral process from foreign interference amplified by artificial intelligence. His efforts highlight the critical role of vigilant monitoring and proactive intervention in protecting democratic institutions in the age of AI-driven misinformation.
This week’s Cyber Pulse Mid-Week Briefings cover Australia’s new Cyber Security Bill, rising ransomware claims, Zscaler's AI-driven platform growth, and cyber threats from East Asia, including Chinese influence operations, North Korean tech theft, and costly global data breach claims.
Telegram is tightening its policies, sharing the IP addresses and phone numbers of criminal users with authorities. As hybrid warfare blends state-backed hacking with cybercrime, Telegram faces mounting pressure to curb illegal activities that exploit its encryption features.
BlackSuit Ransomware Strikes Again! The notorious hackers behind last year's Dallas attack have rebranded as BlackSuit, now demanding $500 million in ransoms! The FBI and CISA confirm the group's new identity, with aggressive tactics and enhanced methods to pressure victims into paying up.