As AI accelerates, platforms like Grok expose vulnerabilities in governance. Billionaires like Elon Musk and Sam Altman, leading trillion-dollar companies, now wield power rivalling nations. Without strong safeguards, AI risks overshadowing ethical considerations, urging immediate action to ensure it serves the public good.
Source: YouTube Bloomberg Originals 2023 & Lex Fridman Podcast 2024
Editor Alexis Pinto
AI Diplomat Team
September 2, 2024

This year has starkly revealed the acceleration and widespread impact of AI, exposing the troubling influence of social platforms and pushing the boundaries of technological advancement at a dangerously rapid pace. The rise and rivalry of AI, epitomised by platforms such as Grok-2 and OpenAI's ChatGPT, have unveiled a deep vulnerability in Western governments, particularly in the United States, where the speed of innovation is reshaping the world at an astonishing rate.

While these advancements bring substantial benefits in fields such as science, space technology, and microbiology, they also pose significant risks. The growing power of trillion-dollar companies and influential billionaires like Elon Musk is increasingly concerning, as their combined revenues now rival the GDP of nations like Australia and South Korea. These tech giants are not only matching the economic might of G20 countries but are also rapidly extending their influence across Western nations.

Unchecked AI Acceleration: Legal Battles and Government Pushback

Many governments are increasingly troubled by, and often seemingly incapable of containing, the effects of the hyperscalers and the unchecked spread of AI-driven platforms and content. The release of Grok-2, for instance, has underscored the dangers of AI operating without adequate oversight, raising alarms about the societal and political impacts of these technologies. The unchecked power and influence of tech moguls like Elon Musk pose serious challenges to democratic governance, as traditional regulatory frameworks struggle to keep pace with the relentless expansion of AI and its disruptive potential.

A recent legal battle in Brazil highlights one of the most pressing issues of our time: the growing tension between government authority and the unchecked influence of tech giants. Last Friday, Brazil's Supreme Court ordered the suspension of Elon Musk’s "X" platform after it defied orders to suspend accounts spreading misinformation. Justice Alexandre de Moraes, who has been in a standoff with Musk since April, mandated the "immediate, complete, and total suspension of X’s operations" in Brazil until all court orders are complied with, fines are paid, and a new legal representative is appointed. This unprecedented move could signal a broader shift as governments worldwide begin to assert their authority over powerful tech moguls who have operated with minimal oversight for too long.

The legal conflict began when Justice de Moraes ordered the suspension of numerous accounts on X for allegedly spreading disinformation—an action Musk condemned as censorship. Despite the court's ruling, Musk refused to comply, leading the court to order internet service providers to block access to the platform across Brazil. Media commentators speculate that this defiance could have severe personal consequences for Musk, including potential arrest warrants, which could impact his broader ambitions, such as the expansion of Starlink's satellite internet service in Brazil. Musk's response has been to hurl insults, calling de Moraes a "tyrant" and a "dictator."

This legal confrontation is emblematic of a broader and more profound struggle between government authority and the unchecked power of billionaires like Musk. It raises critical questions about how democratic institutions can maintain their authority and enforce justice when faced with the immense economic and political clout of trillion-dollar private institutions. President Luiz Inácio Lula da Silva underscored this tension, stating, “Just because the guy [Musk] has a lot of money, doesn’t mean he can disrespect you … Who does he think he is?”

This case represents one of the largest and most significant international confrontations between a national government and a tech billionaire, setting a precedent that could influence similar battles in Western nations. However, it's worth noting that non-Western governments have already taken decisive actions against these tech behemoths. China banned Twitter (now X) back in 2009, along with Facebook, to assert control over the digital landscape. Similarly, in Russia, the government expanded its crackdown on dissent and free media following the invasion of Ukraine in 2022. Russian authorities have blocked access to Twitter, Facebook, Instagram, and other independent media outlets critical of the Kremlin.

These developments point to a growing global resistance against the unchecked power of tech billionaires, with governments increasingly willing to challenge the dominance of these private institutions. The ongoing battle in Brazil could mark the beginning of a broader movement where governments, particularly in Western nations, may finally confront the tech giants that have long operated with impunity. 

The Rapid Pace of AI and the Growing Divide

This concentration of power is not without serious consequences. The rapid deployment of AI technologies, often with minimal safeguards, is forging a digital reality that seems to prioritise innovation over ethical considerations. Figures like Elon Musk, who are pushing the boundaries of AI with platforms like Grok, are moving at such a pace that society is struggling to keep up. The lack of robust safety measures raises profound concerns about the potential for these technologies to cause harm, both domestically and globally.

Source: YouTube Bloomberg Originals

Eric Schmidt, industry veteran and former CEO of Google, has voiced his concerns about the growing divide between the tech giants and everyone else. In a presentation at Stanford earlier this month, Schmidt highlighted the widening gap between the frontier AI models controlled by a few large companies and the rest of the field. He noted that these companies require investments of $10 billion, $20 billion, and possibly more than $50 billion to maintain their lead. Schmidt also pointed out that the energy demands of these AI systems are so massive that even the U.S. government is unprepared to meet them, signalling a shift in power dynamics that could realign the global order as nations scramble to maintain a technological edge.

Schmidt has not been alone in raising the alarm. He emphasised the need for strong guardrails and regulatory frameworks to prevent the misuse of AI. In a recent interview organised by the Nuclear Threat Initiative, he expressed his concerns about safety, warning that "we will see the possibility of extreme risk events" in the next decade. Schmidt explained that the extraordinary power of AI systems, coupled with our limited understanding of their full capabilities, presents inherent risks—particularly when these systems develop new, unforeseen abilities. He also warned of the potential for malicious actors to exploit open-source AI models, potentially creating harmful applications such as synthetic pathogens.

Adding to these concerns, Elon Musk has openly challenged the current direction of AI development, contrasting his approach with that of OpenAI’s Sam Altman. Musk has criticised OpenAI for what he sees as its overly cautious stance, instead advocating for a more "rebellious" AI that, in his view, pushes the boundaries of what is possible. However, this approach has drawn criticism for its potential to exacerbate the very risks Schmidt and others have warned about.

Schmidt, reflecting on conversations with Sam Altman, noted that Altman estimates it will take about $300 billion to fully realise the potential of AI, a figure that underscores the massive scale of resources needed to stay competitive. Schmidt also calculated the immense energy requirements involved, suggesting that even the U.S. government is not fully prepared for the power demands that AI will impose. This highlights the fact that only a handful of tech giants and billionaires have the capital and political influence to navigate these challenges, raising questions about the broader implications for global governance and democratic oversight.

To address these growing risks, Schmidt advocates for comprehensive governance structures, including AI-powered threat detection and response systems, as well as international agreements. He suggests starting with a "no-surprise" treaty to ensure transparency in AI development, as secretive testing could inadvertently trigger dangerous reactions. Schmidt also stressed the importance of building a "human trust framework," acknowledging that this will be an extremely difficult task but one that is crucial for the future of AI.

Source: YouTube, Bloomberg Television

The Rising Dominance and Unchecked Influence of The Magnificent Seven Tech Titans

Another powerful figure and proponent of AI-driven change in society is NVIDIA CEO Jensen Huang, who leads a corporate empire with a market capitalisation of $3.02 trillion and who aptly stated that we have entered a "New Era of Accelerated Computing." Others might call it an era of accelerated learning. Mark Zuckerberg recently highlighted significant developments with Meta's Llama series. Meta has launched Llama 3.1, which includes its most advanced model yet, the Llama 3.1 405B, with 405 billion parameters. This model is designed to rival top AI models such as OpenAI's GPT-4 and Anthropic's Claude 3.5 Sonnet. Llama 3.1 is fully open-source and can be downloaded and run on various cloud services, marking a significant step in Meta's strategy to democratise AI.
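For readers unfamiliar with what "open weights" means in practice, the sketch below shows one common way such a model might be downloaded and run using the Hugging Face transformers library. The repository name and the choice of a smaller 8B variant (the 405B model requires data-centre-class hardware) are illustrative assumptions, not a description of Meta's official deployment path.

```python
# A minimal sketch, assuming access to a Llama 3.1 checkpoint on the Hugging Face Hub.
# The repository ID below is an assumption; gated models also require an access token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed repo name; the 405B variant is far larger

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # spread weights across available devices

prompt = "In one paragraph, explain why open-weight AI models matter for governance."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```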

Looking ahead, Zuckerberg revealed that Meta is already planning for Llama 4, which will require ten times the computing power used for Llama 3. The company is investing heavily in infrastructure to support this next leap, emphasising its commitment to leading in the open-source AI space.

The benefits of AI are undeniable, but they are equally matched by the potential for sinister outcomes, whether deliberate or accidental.

Yet, while acceleration with vision can propel us to new heights, acceleration without careful consideration—without the necessary "hand brakes"—can lead to catastrophic derailment.

The inherent risk of the digital economy lies in the concentration of power in the hands of a select few. These technologies have the potential to impact billions of lives, much like the cyberattacks that have neutralised hundreds of millions of systems and compromised billions of records during the Biden administration alone.

These events have been a sobering reminder of how slow and unwieldy the industrial bureaucracy of the U.S. government has become, unable to keep pace with the rapid technological advancements that are reshaping our world. The evidence is clear in how Silicon Valley and the capital markets have repeatedly underestimated the risks of deploying what could become a weaponised digital "Trojan Horse." This unchecked technological expansion has infiltrated every aspect of society, invading private thoughts and public discourse alike. It acts as an omnipresent influencer in our digital world, capable of altering social behaviours and potentially leading to unpredictable and dangerous outcomes.

The so-called best minds in Washington, who are tasked with safeguarding the interests of the public, seem increasingly outmatched by the new billionaires of this century. The U.S. government has lost its leading position, struggling to keep up with the speed and complexity of AI's development and the aggressive pace at which these technologies are being adopted globally. The inability to forecast the transformative nature of cloud computing, and now artificial intelligence, is emblematic of this failure.

Source: YouTube, Bloomberg Originals

Tech Giants Command Markets and Influence Billions

The mega-cap giants, often referred to as the "Magnificent Seven", have not only outpaced the stock market but have also fundamentally transformed the global economy. By June 2024, their combined market capitalisation had skyrocketed to an astonishing $16 trillion, up from $11.79 trillion in January 2024. This staggering figure dwarfs the nearly $3 trillion market cap of the Russell 2000® Index, which consists of 2,000 small-cap stocks, highlighting the seismic shifts these companies (Apple, Microsoft, Alphabet, Amazon, Nvidia, Meta Platforms, and Tesla) are driving in the financial landscape.

The influence of these tech giants extends far beyond the borders of the United States, impacting the global economy in unprecedented ways. In 2023 alone, they were responsible for nearly 40% of the 22.8% total return in the MSCI ACWI Index, a clear indication of their pivotal role in shaping the future of global markets and economies.
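A quick back-of-the-envelope calculation, using only the figures quoted above, helps put these numbers in perspective; the values are the article's own rounded estimates, not fresh market data.

```python
# Rough arithmetic on the figures cited above (market caps in trillions of US dollars).
mag7_jan_2024 = 11.79   # combined market cap, January 2024
mag7_jun_2024 = 16.00   # combined market cap, June 2024
russell_2000 = 3.00     # approximate Russell 2000 market cap

growth = (mag7_jun_2024 - mag7_jan_2024) / mag7_jan_2024
print(f"Six-month growth in combined market cap: {growth:.1%}")                          # ~35.7%
print(f"Multiple of the Russell 2000's market cap: {mag7_jun_2024 / russell_2000:.1f}x")  # ~5.3x

# Contribution to the MSCI ACWI Index's 2023 return, as described above.
acwi_return_2023 = 22.8   # total return in percent
mag7_share = 0.40         # "nearly 40%" of that return
points = acwi_return_2023 * mag7_share
print(f"Roughly {points:.1f} percentage points of the 22.8% return came from these seven stocks")  # ~9.1
```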

These companies are led by some of the wealthiest and most influential individuals in history. Mark Zuckerberg of Meta, with a personal fortune of $180 billion, Jeff Bezos, commanding $196.2 billion, and Elon Musk, whose wealth has reached an eye-watering $243 billion, are at the forefront of this technological revolution. Their platforms, which collectively reach billions of users worldwide, have resulted in an extraordinary concentration of power that influences not only global markets but also social and political dynamics on a massive scale.

As these giants continue to amass wealth and expand their influence, particularly through cutting-edge AI infrastructure developments spearheaded by leaders like NVIDIA's Jensen Huang, there is a palpable sense of both immense opportunity and significant risk. In just two decades, their collective fortunes have soared to a staggering $625.8 billion, and they are on the brink of collectively reaching the trillion-dollar mark. This convergence of capital, technology, and political influence is driving relentless expansion into cloud computing and AI technology at an accelerating pace, turning these advancements into a billionaire's spinning wheel of fortune.

Index weight of the top seven companies in the S&P 500® has grown over time. Source: Bloomberg, as of December 29, 2023.

However, with this rapid expansion comes the potential for unintended consequences. Every digital device, every interconnected machine, and every user is now part of this rapidly expanding "AI Internet of Everything." While the technological advancements being made are groundbreaking, they also raise critical questions about control, oversight, and the broader implications for society. Can these monumental changes be effectively regulated, or has the momentum become too powerful for any governing body to manage?

Source: YouTube 2016

Consider the implications of what lies on the horizon, especially as we approach the era of synthetic intelligence-powered robots and Artificial General Intelligence (AGI). The key players—Elon Musk, Sam Altman, and other tech billionaires from the hyperscale companies—are poised to wield unprecedented influence over the trajectory of Western society over the next decade, and likely beyond. The question we must grapple with is: What happens when they do?

Is now the moment to close the barn doors, or should we instead embrace a measured approach, exercising moderation, caution, and patience? Society and its leaders need time to acclimate to this new reality—one where artificial (and potentially superintelligent) entities become integral to daily life and work. If we rush headlong into this future, we risk destabilising society, exacerbating uncertainties, and increasing fragility in ways we may not be fully prepared to manage.

The commercial and social power currently exercised by these same billionaires (who run X, Amazon, Meta and Google) can demonstrate, in principle, what AI might do to society at scale. These platforms collectively reach around half of the world's population every day. It is through them that public debate is framed and elections are won and lost, and through them that viewers are drowned in fake news and misinformation campaigns spanning the full spectrum from private and anonymous interest groups to operations directly controlled by nation states. Their effects on our culture and marketplaces today are incalculably greater, and inherently more volatile, than ever before.

But beneath these systems lies what appears, to our eyes, as formless and undifferentiated terrain: the power centres gathering the energy and resources to effect all of this, the vast networks of integrated cloud infrastructure and data centres, the inscrutable algorithms, the data streams and the encryption, operating at every scale, worldwide, citywide, street by street and device by device, along with the spectrum of interactions that link them all.

Some of that underlying infrastructure is owned, used or even developed by other billionaires, by their close collaborators and hangers-on, or by hired contractors operating in secret. But regardless of how it was produced, virtually all of it is developed and run behind closed doors, where almost no one can interfere with, much less regulate, the outcome. We cannot see whether these systems are safe or not.

Such unchecked power, concentrated in so few hands, must be addressed immediately: strong regulations, global coordination and robust commitments to ethically grounded AI development are essential if these technologies are to serve the public good rather than private ambition. Specific and trackable steps, such as independent review of the systems running AI, are needed to ensure that these technologies are used in ways consistent with our values and do not perpetuate inequality or further erode democratic institutions.

At a time of unprecedented acceleration and volatility, with international strategic competition heightening global tensions and casting long shadows over political and geopolitical stability, we are also approaching a critical juncture in the United States, a country entering an election year. In this context, social platforms have become weaponised, dominating information flows and shaping public perception in ways we have never seen before. The rapid unleashing of artificial intelligence, particularly since the release of ChatGPT to the public on November 30, 2022, has only compounded these challenges, transforming the landscape in less than two years.

In such a fraught environment, careful deliberation and open conversation are more crucial than ever. The world stands on the brink of a technological revolution, and we must approach this next leap with both eyes open to the profound risks and rewards it presents. The stakes are extraordinarily high—not just for the future management of AI, but for the trajectory of our digital societies over the coming decades, and perhaps even the next century.

