From Telegram's Data Sharing to AI-Driven Election Interference: Unveiling Cyber Threats on Social Platforms

Cybercriminals and state-sponsored actors exploit social media for espionage and disinformation. Telegram is under fire for sharing data with Russia’s FSB, prompting Ukraine to restrict it. OpenAI's Ben Nimmo fights AI-driven disinformation targeting U.S. and European elections.


CNC Cyber Pulse: Social Media Exploitation and AI-Driven Disinformation in Global Statecraft

In today's digital landscape, social media platforms have become pivotal arenas for both cybercriminal syndicates and state-sponsored espionage operations. Two recent developments underscore how these platforms are being leveraged to influence public opinion, disrupt democratic processes, and conduct covert operations. This analysis delves into the multifaceted role of social media in facilitating cybercrime and statecraft, as well as the emerging impact of artificial intelligence in these domains.

Telegram Under Fire: Russian Access Confirmed, Ukraine Responds with Platform Restrictions

Previously, CNC reported on significant shifts within Telegram following legal pressure on the platform's leadership. On September 23, 2024, Telegram announced that it would comply with valid legal requests, agreeing to share user IP addresses and phone numbers with authorities. The change is accompanied by an AI-supported moderation team and a bot for reporting illegal content, marking a substantial shift in the platform's approach to user privacy and content management.
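
Telegram has not published how its reporting bot is built, but the general pattern is simple to sketch against the public Bot API. The Python snippet below is a minimal, hypothetical illustration using the documented getUpdates and sendMessage methods; the token placeholder, the /report command, and the acknowledgement text are our assumptions, not Telegram's actual moderation pipeline.

```python
# Minimal, hypothetical sketch of a content-reporting Telegram bot built on
# the public Bot API (getUpdates long polling + sendMessage). The token,
# /report command, and reply text are illustrative assumptions only.
import requests

TOKEN = "YOUR_BOT_TOKEN"  # placeholder: a real token is issued by @BotFather
API = f"https://api.telegram.org/bot{TOKEN}/"

def call(method: str, **params):
    """Invoke a Bot API method and return its 'result' payload."""
    resp = requests.post(API + method, json=params, timeout=40)
    resp.raise_for_status()
    return resp.json()["result"]

def poll_reports():
    offset = None  # high-water mark so each update is processed once
    while True:
        params = {"timeout": 30}  # long polling
        if offset is not None:
            params["offset"] = offset
        for update in call("getUpdates", **params):
            offset = update["update_id"] + 1
            msg = update.get("message") or {}
            if msg.get("text", "").startswith("/report"):
                # A real pipeline would queue the report for (AI-assisted)
                # moderator review; here we only acknowledge receipt.
                call("sendMessage",
                     chat_id=msg["chat"]["id"],
                     text="Thanks - your report has been logged for review.")

if __name__ == "__main__":
    poll_reports()
```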

However, this new stance highlights Telegram's complex history with data sharing, especially in relation to Russia. Reports confirm that since 2018, Telegram has provided Russia's Federal Security Service (FSB) with access to user data, a level of cooperation denied to Western authorities. Ukraine's National Coordination Centre for Cybersecurity recently restricted Telegram use on official devices across government, military, and critical infrastructure sectors, citing Russian intelligence exploitation. Rob Joyce, the NSA's former director of cybersecurity, noted,

“The idea that he [Durov] could come and go while defying Russia is inconceivable,” 

emphasising the geopolitical nuances of Telegram’s data-sharing practices.

The fallout has been swift in underground circles, where discussions reveal a strong push toward alternative platforms. Nearly every major forum, from Exploit to Cracked, has opened threads on migration options, with many users advocating platforms such as Jabber, Tox, Matrix, Signal, and Session.



AI and Election Security: OpenAI’s Ben Nimmo Leads the Fight Against Foreign Disinformation

An Editorial Extract and Review by CNC on Ben Nimmo: "This Threat Hunter Chases U.S. Foes Exploiting AI to Sway the Election"

As the United States approaches the 2024 presidential election, the intersection of artificial intelligence and election security has become increasingly critical. Ben Nimmo, the principal threat investigator at OpenAI, is at the forefront of efforts to counter foreign disinformation campaigns that leverage AI technologies. According to a report by The Washington Post, Nimmo has found that nations such as Russia, China, and Iran are experimenting with tools like ChatGPT to generate targeted social media content aimed at influencing American political opinion.

In a significant June briefing with national security officials, Nimmo's findings proved so impactful that officials meticulously highlighted and annotated key sections of his report. The reaction underscores the growing urgency surrounding AI-driven disinformation and its potential to affect democratic processes. While Nimmo characterises current attempts by foreign adversaries as "amateurish and bumbling," there is palpable concern that these actors may soon refine their tactics and expand their operations, using AI to disseminate divisive rhetoric more effectively.

A notable example from Nimmo's recent investigations involves an Iranian operation designed to increase polarisation within the United States. The campaign distributed long-form articles and social media posts on sensitive topics such as the Gaza conflict and U.S. policies toward Israel, aiming to manipulate public discourse and exacerbate societal divisions.
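
OpenAI has not disclosed the mechanics behind these investigations, so the sketch below illustrates only a generic, publicly known signal for surfacing campaigns of this kind: flagging near-duplicate posts across accounts, since influence operations frequently recycle lightly reworded talking points. The sample posts, the trigram shingling, and the 0.6 similarity threshold are all invented for illustration; none of this represents OpenAI's or Nimmo's actual methodology.

```python
# Generic illustration: flag clusters of near-duplicate posts, a common
# public signal of coordinated inauthentic behaviour. This is NOT OpenAI's
# method; the posts and the 0.6 threshold are invented for the example.
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles for a post."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

posts = {  # hypothetical posts from different accounts
    "acct_1": "the policy is a disaster and everyone knows the policy is failing",
    "acct_2": "the policy is a disaster and everyone knows the policy is doomed",
    "acct_3": "lovely weather in brighton today, off to the beach",
}

sets = {acct: shingles(text) for acct, text in posts.items()}
for a1, a2 in combinations(posts, 2):
    sim = jaccard(sets[a1], sets[a2])
    if sim > 0.6:  # arbitrary illustrative threshold
        print(f"possible coordination: {a1} / {a2} (similarity {sim:.2f})")
```

Run as-is, the sketch flags only acct_1 and acct_2, whose posts share most of their trigrams; in practice such text-similarity signals are just one input alongside account metadata and posting-time patterns.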

Nimmo's work has gained particular significance as other major tech companies scale back their efforts to combat disinformation. Colleagues and national security experts view his contributions as essential, especially in the absence of broader industry initiatives. However, some peers urge caution about potential corporate influence over the transparency of these disclosures. Darren Linvill, a professor at Clemson University, remarked that Nimmo

"has certain incentives to downplay the impact,"

suggesting that OpenAI's business interests might affect the extent of information shared.

Despite these concerns, OpenAI and Nimmo remain steadfast in their mission. Nimmo continues to focus on detecting and neutralising disinformation campaigns before they gain momentum, aiming to safeguard the integrity of the electoral process from foreign interference amplified by artificial intelligence. His efforts highlight the critical role of vigilant monitoring and proactive intervention in protecting democratic institutions in the age of AI-driven disinformation.
