Image: Deepfakes of Boris Johnson and Jeremy Corbyn created for educational purposes by 'Future Advocacy' on YouTube.
Thomas Ricardo - Cyber Analyst Reporter
June 5, 2024



The Rise of AI-Generated Disinformation

In 2024, as artificial intelligence continues to advance at an unprecedented pace, society is grappling with the darker potentials of this technology.

The emergence of undetectable deepfakes and sophisticated misinformation campaigns threatens not only elections in the United States but also the social fabric and stability of democracies worldwide. 

As we confront these challenges, the critical question arises: How do we protect unique identities and preserve trust in our institutions?

This dilemma signals either a descent into cybersecurity chaos or an opportunity for innovation and collaboration to establish ethical guidelines and robust defences.

AI Enables High Volumes of Engaging Content for Monetary Gain

AI tools like text and image generators allow spammers to produce large volumes of visually appealing and engaging content cheaply and quickly.

This AI-generated content draws attention and interactions (likes, comments, shares) from users, signalling to social media algorithms to promote it further.

The engaging AI posts often contain links or lead to external websites filled with ads, allowing spammers to generate ad revenue from the traffic.

Some spammers use AI images to grab attention, then comment with spam links on those posts. The ultimate goal is to drive traffic to these ad-laden websites or promote dubious products/services for profit. 

The same techniques can be directed at the election process: fake websites stocked with photos, videos, and content crafted to manipulate hearts and minds over why, and for whom, people should vote.

Circumventing Detection and Spreading Misinformation

AI allows spammers to generate unique content at scale, making it harder for platforms to detect patterns and filter out spam. As AI language models improve, the generated content becomes more human-like, further evading detection.

Profit drives much of the spam on social media, but AI-generated spam has an even darker side: the spread of misinformation and false narratives.

Automated AI bots can amplify these campaigns by flooding platforms with synthetic content, making it harder to detect and counteract.

AI equips spammers with the tools to create deceptive, viral content that can evade detection, all while they profit through dubious means such as ad farms, product promotions, or even spreading misinformation during election campaigns.

This weaponisation of AI in spreading misinformation effectively "socialises" election manipulation. Over the years, we have come to trust what we see and read, which makes us more susceptible to falling into the rabbit hole of fabricated realities.

Deepfake videos, audio clips, and synthetic images are particularly concerning as they can be used to spread false information about political figures and events. 

President Joe Biden's Magical Pistachio Story (Deepfake AI). Source: Marshall Artist on YouTube.

For example, a recent incident in New Hampshire involved a deepfake audio robocall mimicking President Joe Biden's voice, urging voters to skip the state's primary just weeks before the vote.

Or consider a more comedic (but no less scary) deepfake video in which President Biden tells a story about his magical pistachio.

Such cases underscore the ease with which AI can be weaponised to mislead voters and manipulate public opinion.

The Threat Landscape

The potential for AI to disrupt elections is both significant and deeply concerning. A recent Elon University poll revealed that 73 percent of Americans believe AI will be used to manipulate social media and influence the election.

"And they're not confident at all in their ability to detect fake information," Elon poll director Jason Husser said of the people surveyed. "They're even less confident in the ability of other voters to detect fake information."

Furthermore, 70 percent think fake information generated by AI will affect the electoral process, and 62 percent anticipate targeted AI campaigns designed to dissuade voters.

Overall, 78 percent of respondents expect at least one form of AI abuse to impact the election, with over half believing that all three forms—social media manipulation, fake news, and voter suppression—are likely to occur.

These findings reveal a high level of public awareness and concern about AI's potential misuse.

However, this awareness might also play a role in mitigating the impact of such tactics. Despite the heightened vigilance, 69 percent of those surveyed expressed doubts about the average voter’s ability to detect fake media, and 52 percent lacked confidence in their own ability to discern deepfakes.

This suggests that while anticipation of AI-driven misinformation may provide some level of protection, the general public remains vulnerable to sophisticated AI deceptions.

The New Era of AI-Powered Cybersecurity

As the threat of AI-driven misinformation looms large, it presents a pivotal moment for cybersecurity professionals.

The task at hand is not just to develop defences against these new threats but to establish ethical guidelines that govern AI's use.

This is an opportunity to innovate and create strategic approaches to cyber intelligence and media detection technology.

One promising avenue is the development of digital signature and geospatial identity technologies. These can offer robust methods to verify the authenticity of digital content, providing a screen against deepfakes and other forms of misinformation.

By leveraging bio-digital security measures and enhancing collaboration between citizens and government, we can create a more secure digital landscape.

Combating Deepfake Videos

While digital signatures are effective for documents and images, they face challenges with videos due to the various formats and the complexity of video data.

However, technologies like Archangel, which uses blockchain and neural networks, have been developed to create a smart archive of original videos. This allows for the verification of video content against the original, rejecting any tampered or edited versions.
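
Archangel's actual system pairs a distributed ledger with neural-network hashing of archived footage, which is well beyond a short example. Purely as an illustrative sketch of the underlying idea, verifying a circulating clip against a trusted archived original by comparing perceptual hashes of sampled frames, something like the following Python could be used. It assumes the third-party opencv-python, Pillow, and ImageHash packages, and the sampling rate and distance threshold are arbitrary choices, not Archangel's.

```python
import cv2                      # opencv-python, for reading video frames
from PIL import Image           # Pillow
import imagehash                # ImageHash, perceptual hashing


def frame_hashes(path: str, every_n: int = 30) -> list:
    """Sample every Nth frame of a video and compute a perceptual hash for each."""
    cap = cv2.VideoCapture(path)
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes


def matches_archive(candidate: str, original: str, max_distance: int = 8) -> bool:
    """Accept the candidate only if every sampled frame stays close to the archived original."""
    cand, orig = frame_hashes(candidate), frame_hashes(original)
    if len(cand) != len(orig):   # inserted or deleted footage changes the frame count
        return False
    # Subtracting two ImageHash values gives the Hamming distance between them
    return all(c - o <= max_distance for c, o in zip(cand, orig))


# Usage: verify a circulating clip against the trusted archive copy.
# print(matches_archive("circulating_clip.mp4", "archive/original.mp4"))
```

A tampered or edited version drifts away from the archived original's frame hashes and is rejected, which is the behaviour the archive approach relies on.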

Camera manufacturers like Nikon, Sony, and Canon are developing systems to embed tamper-resistant digital signatures directly into images at the time of capture.

These signatures include metadata such as timestamps, location, and the photographer's name, making it easier to distinguish genuine photographs from deepfakes.
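
The manufacturers' actual signing schemes are proprietary, but the core mechanism, binding a signature to the image bytes and the capture metadata together, can be sketched in a few lines of Python with the cryptography package. The metadata fields and key handling below are illustrative assumptions, not any vendor's real format.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(image: bytes, metadata: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the image bytes and capture metadata together, so editing either breaks the signature."""
    payload = image + json.dumps(metadata, sort_keys=True).encode()
    return key.sign(payload)


def verify_capture(image: bytes, metadata: dict, sig: bytes, pub: Ed25519PublicKey) -> bool:
    payload = image + json.dumps(metadata, sort_keys=True).encode()
    try:
        pub.verify(sig, payload)
        return True
    except InvalidSignature:
        return False


# Hypothetical capture metadata of the kind the article describes; real
# in-camera formats (and how keys are provisioned) differ by vendor.
metadata = {
    "timestamp": "2024-06-05T09:30:00Z",
    "location": "51.5074,-0.1278",
    "photographer": "Jane Doe",
}

key = Ed25519PrivateKey.generate()   # stands in for a tamper-resistant in-camera key
image = b"...raw image bytes..."
sig = sign_capture(image, metadata, key)

print(verify_capture(image, metadata, sig, key.public_key()))            # True
print(verify_capture(image + b"\x00", metadata, sig, key.public_key()))  # False: one altered byte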

Blockchain And Decentralised Identities

Blockchain technology can enhance the verification process by providing an immutable ledger for digital signatures.

This ensures that the provenance and integrity of digital content can be verified universally, regardless of geographical or jurisdictional boundaries.

Decentralised identity solutions on blockchain can further strengthen this by providing a secure and scalable way to verify digital identities and content authenticity.
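
A real deployment would build on an established blockchain and a decentralised identifier standard rather than anything hand-rolled, but a minimal hash-chained registry shows why such a ledger makes provenance records tamper-evident: altering any past record changes its digest and breaks every link after it. The Entry fields and DID-style identifiers in this Python sketch are hypothetical.

```python
import hashlib
import json
import time
from dataclasses import dataclass


@dataclass
class Entry:
    """One provenance record: who registered what content, and when."""
    content_hash: str   # SHA-256 of the digital content itself
    publisher_id: str   # a decentralised identity, e.g. a DID-style string
    timestamp: float
    prev_hash: str      # digest of the previous entry, chaining the ledger

    def digest(self) -> str:
        return hashlib.sha256(
            json.dumps(self.__dict__, sort_keys=True).encode()
        ).hexdigest()


class Ledger:
    def __init__(self) -> None:
        self.entries: list[Entry] = []

    def register(self, content: bytes, publisher_id: str) -> Entry:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = Entry(hashlib.sha256(content).hexdigest(), publisher_id, time.time(), prev)
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Editing any past entry changes its digest and breaks every later link."""
        return all(
            cur.prev_hash == prev.digest()
            for prev, cur in zip(self.entries, self.entries[1:])
        )


# Usage: register two pieces of content, then confirm the chain is intact.
ledger = Ledger()
ledger.register(b"original press photo bytes", "did:example:photographer-123")
ledger.register(b"campaign video bytes", "did:example:newsroom-456")
print(ledger.verify_chain())  # True
```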

Industry Collaboration And Standards

The collaboration among industry giants and the development of standards for digital signatures and content verification tools, such as the Verify platform, are crucial steps in combating deepfakes.

These initiatives aim to create a unified approach to verifying the authenticity of digital content, making it more difficult for deepfakes to proliferate.

Moreover, the integration of AI into cybersecurity can itself become a powerful tool in the fight against AI-driven threats.

As AI technology advances, so too does its potential for misuse, particularly in the realm of deepfakes and sophisticated misinformation campaigns.

These threats not only undermine the integrity of democratic processes but also erode the social trust that is foundational to our societies.

The battle against AI-driven misinformation is not just a technological arms race; it is a critical ethical and societal issue. While AI can be weaponised to deceive and manipulate, it also offers powerful tools for defence.

Innovations in digital signatures, blockchain technology, and decentralised identities provide promising avenues to verify the authenticity of digital content and safeguard against deepfakes.

