The 1970s oil crisis revealed the dangers of reliance on a few suppliers, a scenario now echoed in the control of AI by tech giants. Like OPEC’s hold on oil, companies like Google and Meta dominate AI, raising concerns about economic dependence and geopolitical power imbalances.
Editor Alexis Pinto
AI Diplomat Team
September 14, 2024



In 1979, gas customers at the Texaco station on Industrial Blvd. and Reunion had to wait nearly an hour to fill up. With five lines of around 15 cars each, the long wait times were a common sight. (The Dallas Morning News)

The oil crises of the 1970s, sparked by OPEC’s strategic manipulation of oil supplies, revealed the vulnerabilities of global dependence on a critical resource. When OPEC, led by Arab nations, imposed an oil embargo against the United States and its allies in 1973, it triggered a wave of inflation, economic stagnation, and political upheaval that forced Western nations to rethink their energy policies and alliances.

Today, the control of advanced artificial intelligence (AI) technologies by a handful of tech giants presents a parallel scenario, with equally profound geopolitical consequences. The power concentrated in these few companies has the potential to reshape global stability, economic dependency, and the very fabric of international relations.

Much like OPEC’s dominance over oil, today’s tech titans—Google, Meta, and the companies led by figures such as Elon Musk, Mark Zuckerberg, and Amazon founder Jeff Bezos—hold immense sway over AI development and advanced computing technologies. This concentration of technological power raises significant concerns about the potential for cartel-like behavior, in which AI hyperscalers could dictate terms to democratic and autocratic nations alike.

The geopolitical implications are vast, particularly as countries with less stringent ethical standards and regulatory guardrails, including autocratic regimes, seize this opportunity to advance their own AI agendas. This shift could invite cutting-edge developers to experiment in environments with minimal oversight, potentially leading to the emergence of Artificial General Intelligence (AGI) in regions like China, the Gulf States, Russia, Iran, or even North Korea.

Geopolitical Realignments and the Rise of AI Cartels

The AI landscape today is reminiscent of the 1970s’ oil crisis, where global alliances were tested, and national interests diverged under economic stress. The United States, heavily reliant on foreign oil during that period, found its geopolitical strategies compromised, leading to a realignment of foreign policies and the creation of strategic petroleum reserves. In the current context, the U.S. and its allies find themselves increasingly dependent on a few tech giants that control the most advanced AI technologies. This dependency introduces vulnerabilities that can be exploited economically and geopolitically.

The rise of AI hyperscalers has created a new form of competition, where the struggle for technological supremacy could lead to cartel-like behavior among these companies. This scenario is exacerbated by autocratic nations that are willing to bypass ethical considerations and regulatory frameworks to gain an edge in AI development. Countries such as China, the UAE, and Russia are already investing heavily in AI superclusters, with the potential to attract developers eager to push the boundaries of AI without the constraints imposed by democratic nations. This could lead to the development of AGI outside the traditional G7 nations, where regulatory oversight is more rigorous, potentially giving these nations a significant strategic advantage.

This environment of unchecked AI development is not just a theoretical concern. The concentration of AI power in the hands of a few companies and nations could lead to a new era of digital colonialism, where economic and political influence is wielded through control over AI technologies. As seen with OPEC’s influence over oil, the tech giants’ control over AI could force democratic nations to rally against the growing power of these companies, which continues to expand geometrically. This situation is the byproduct of unfettered capitalism, where private companies gain expansive power that governments now must contend with as a potential international threat.

Senator Scott Wiener (D-San Francisco). Source: YouTube, Fox40 News

Navigating The AI Regulatory Maze: California’s SB 1047 In The Spotlight

In response to the growing influence of AI hyperscalers, governments are grappling with the need to establish regulatory frameworks that can curb this immense power while still fostering innovation. California’s SB 1047 has emerged as a focal point in this debate, representing a critical juncture in the AI revolution. On August 28, 2024, this landmark AI safety bill, authored by Senator Scott Wiener (D-San Francisco), passed the State Assembly and Senate, positioning California at the forefront of AI governance with global implications. The bill introduces the nation’s first safeguards designed to prevent AI from being weaponized for cyberattacks on critical infrastructure, the development of chemical, nuclear, or biological weapons, and the facilitation of AI-driven automated crime. With Governor Gavin Newsom's signature now pending, SB 1047 is poised to set a precedent that could reshape AI regulation across the United States and beyond.

However, this regulatory push has raised significant concerns within the tech industry. Critics argue that SB 1047 could drive AI companies out of California, hitting Silicon Valley startups particularly hard. Industry voices warn that such legislation represents regulatory overreach, imposing burdensome requirements that could stifle innovation and push investment out of the state. The fear is that overly cautious regulation could deter future experimentation or drive AI development to regions with less stringent oversight.

Strategic Oversight and Guardrails in the AI Era

The oil crises of the 1970s underscored the importance of strategic oversight and the need for robust regulatory frameworks to manage the geopolitical and economic risks associated with dependence on a critical resource. In response to these crises, the U.S. and its allies implemented a range of measures designed to mitigate the impact of future oil shocks, including the creation of strategic petroleum reserves and the promotion of energy efficiency. These efforts were not just about securing energy supplies but also about reducing the geopolitical leverage that OPEC could exert over the West.

In the context of AI, similar measures are urgently needed to prevent a handful of companies from wielding too much power over the global economy and political landscape. Recent efforts by governments to establish AI regulations—such as the recent passing of California's SB 1047 bill, President Biden’s AI Executive Order, the UK’s AI Principles, and the EU’s AI Act—represent important steps in this direction. These initiatives aim to align AI development with public interest, ensuring it is conducted transparently and ethically, and that it does not exacerbate inequalities or create new forms of dependency.

However, these measures must go beyond symbolic gestures. They require strong enforcement mechanisms and a commitment to international cooperation. Just as the oil crises led to the creation of new international institutions and frameworks for managing energy supplies, the current AI landscape demands a similar level of strategic oversight. This could involve the creation of international agreements on AI development and use, the establishment of public AI infrastructure to reduce dependency on private companies, and the promotion of open-source AI technologies that are accessible to all nations.

Image generated by Grok 2. Prompt: “evolution of AI”

The Stakes of the AI Revolution

The stakes in the AI revolution are extraordinarily high. If left unchecked, the concentration of AI power in the hands of a few could lead to a new era of geopolitical instability, where economic and political power is concentrated in a small number of tech empires. Autocratic nations, willing to bypass ethical constraints, could become new hubs of AI development, exacerbating global tensions and potentially leading to the creation of AGI in environments where regulation is lax or nonexistent.

The lessons of the 1970s are clear: We must act now to diversify our technological resources, establish strategic reserves, and create robust regulatory frameworks that can safeguard global stability. The AI revolution presents both an opportunity and a threat. The challenge for democratic nations is to navigate this complex landscape, ensuring that AI serves the global good rather than entrenching the power of a few. As we stand on the brink of a new era in AI, the question is not just about how we regulate this technology, but how we ensure it does not become the new oil, with all the geopolitical consequences that entails. The future of AI—and the future of global stability—depends on our ability to rise to this challenge.

