Cyber Breaches and AI Deepfakes: How the 2024 US Elections Are Vulnerable to Misinformation

Chinese hackers allegedly breached U.S. telecom networks tied to the Harris and Trump campaigns, exposing election security gaps. Meanwhile, AI-driven deepfakes and disinformation are surging on social media, raising risks to democracy as Election Day approaches.



Chinese Hackers Allegedly Breach U.S. Telecoms, Raising Election Security Concerns

Recent reports of alleged Chinese government-sponsored hackers infiltrating key U.S. telecom networks connected to both Kamala Harris’s and Donald Trump’s campaign communications signal a new level of threat to democratic systems. A group identified as Salt Typhoon allegedly breached the systems of major telecommunications companies, including AT&T, Verizon, and Lumen, potentially exposing sensitive campaign communications and wiretap operations. The FBI and Cybersecurity and Infrastructure Security Agency (CISA) swiftly responded, stating, “After the FBI identified specific malicious activity targeting the sector, the FBI and [CISA] immediately notified affected companies, rendered technical assistance, and rapidly shared information to assist other potential victims.”

As the probe intensifies, these breaches highlight the vulnerability of electoral infrastructure and the growing risks that foreign interference poses to democratic nations.

Michael Kaiser, president and CEO of Defending Digital Campaigns (DDC), emphasized the stakes involved: “Our personal devices are prime targets because they have the potential to reveal so much about us—including who we speak to, our travel and meeting plans, communications with key staffers and family members, and more.”

With Election Day fast approaching, candidates and their teams face an unprecedented onslaught of cyber threats that jeopardize not only their privacy but also the broader democratic process.

AI Deepfakes and Disinformation as New Tools of Election Interference

In addition to traditional hacking, the U.S. election season is contending with a deluge of artificial intelligence (AI)-generated disinformation. AI-powered deepfake videos, often portraying manipulated or entirely fabricated scenarios, have become powerful weapons in the arsenals of foreign entities. 

Examples include an expletive-laden deepfake video of Joe Biden, a doctored image of Donald Trump being forcibly arrested, and a fabricated video of Kamala Harris casting doubt on her own competence—each designed to confuse and mislead the electorate. “These recent examples are highly representative of how deepfakes will be used in politics going forward,” said Lucas Hansen, co-founder of the nonprofit CivAI. “While AI-powered disinformation is certainly a concern, the most likely applications will be manufactured images and videos intended to provoke anger and worsen partisan tension.”

Arizona’s Secretary of State Adrian Fontes echoed concerns about AI’s role in voter manipulation, noting that “generative artificial intelligence and the ways that those might be used” represent a significant challenge for election officials. Arizona, among other battleground states, has taken proactive steps to prepare for such scenarios, with officials conducting tabletop exercises that envision Election Day disruptions fueled by AI-generated deepfakes.

The threat goes beyond mere confusion. In a politically polarized environment, these AI-powered manipulations are purpose-built to inflame existing divides, potentially influencing voter turnout. As Hansen demonstrated to reporters, an AI chatbot could be easily programmed to disseminate false claims about polling locations or election times, subtly steering certain voter groups away from the polls. This flood of deceptive content, often launched through anonymous social media accounts, adds yet another layer to the foreign interference puzzle.

The Multiplying Effect of AI on Social Media Platforms

The reach of AI disinformation is dramatically amplified by social media platforms, where algorithms can prioritize content based on engagement. This creates an environment ripe for exploitation by foreign entities, who seek to weaponize these algorithms to maximize the spread of false information. Russia and North Korea are reportedly leveraging social media as key vehicles for their disinformation campaigns, particularly targeting issues that polarize U.S. voters, such as immigration, healthcare, and race relations.

The multiplier effect created by AI and social media algorithms is of growing concern. In one example, Elon Musk shared a deepfake video of Kamala Harris with his 192 million followers on X (formerly Twitter), in which Harris, in a fabricated voiceover, calls President Biden “senile” and expresses doubt about her own ability to govern.

The video, devoid of any disclaimer, spread quickly, stoking anger and confusion. It was only after a backlash that Musk clarified the video was intended as satire. This incident demonstrates the extraordinary influence that high-profile individuals can have in spreading manipulated content, as well as the need for clearer guidelines to flag AI-altered media.


The role of AI in disinformation is a complex problem that requires active management by tech companies. However, with many social media platforms scaling back content moderation, there are serious concerns that these AI engines will continue to spread misinformation at massive scale, ultimately influencing voter behavior and fueling distrust in the electoral system.

Outlook: The Persistent Shadow of Cyber Threats to Democracy

As the November 5 election approaches, the stakes could not be higher. This moment could mark the beginning of a new era of pervasive AI-driven disinformation, fake news, and fragmented democratic processes—and whatever the result, the election's aftermath could catalyze an even greater wave of AI-fueled misinformation campaigns and social media bot activity.

Will this be the new norm—a generation of amplified falsehoods and manipulated realities threatening the core of democracy? Or will the results of this election push these digital threats to unprecedented levels, leaving Western democracies to grapple with the fallout? The world waits, bracing for an answer.
