As AI continues to evolve, additional risks—both real and perceived—may emerge. Recent advancements, such as large language models (LLMs) and generative AI, are expected to significantly influence fields like science, governance, crime, entertainment, warfare, industry, and management.
However, policymakers focused on AI governance must maintain a balanced perspective. Despite the surrounding hype, AI holds immense potential to enhance lives globally and should not be dismissed solely due to hypothetical risks.
The goal of AI governance is to maximize its benefits while mitigating anticipated harms. This can be achieved through a combination of statutory regulations, institutional frameworks, technical standards, economic incentives, and ethical guidelines—each contributing to a complex governance landscape.
Lessons from Internet Governance
In 2021, we introduced the concept of the Four Internets, outlining four distinct governance models shaped by geopolitical and ideological factors. Though not mutually exclusive, these models define how the internet functions worldwide:
- The Open Internet – A decentralized, collaborative, and transparent space with anonymity and free-flowing information.
- The Bourgeois Internet – A regulated environment that upholds rights and civil discourse.
- The Paternal Internet – A controlled space where specific content, such as political speech or explicit material, is restricted.
- The Commercial Internet – A market-driven model that treats the internet as private property to incentivize economic solutions.
Additionally, a fifth model, influenced by the hacker ethic, challenges authority by exploiting and undermining established systems.
These internet governance models remain relevant today—not just for managing the internet but also for shaping AI governance. AI relies heavily on the internet for data, cloud computing, and accessibility. Unsurprisingly, major internet companies are also leading the generative AI revolution.
A Framework for AI Governance
As AI governance develops, the focus is primarily on tangible risks rather than speculative existential threats. Governments have responded differently: China enforces strict regulations, the EU adopts a moderate stance, the UK and US have minimal legislation, while countries like Japan and the UAE aim to foster AI innovation with fewer constraints. New regulatory bodies, including the EU’s AI Office and the UK’s AI Safety Institute, have been established, alongside global cooperation efforts like the UN AI Advisory Body and the G7’s Hiroshima Process. Over time, a structured AI governance framework will emerge, integrating regulations, global policies, research, and best practices.
Building on the Four Internets model, we introduce Five Artificial Intelligence Management Strategies (AIMS) to classify AI governance approaches:
- Open AIMS – Advocating for collaborative, open innovation for societal benefit.
- Bourgeois AIMS – Ensuring AI’s potential is realized while safeguarding rights and ethical standards.
- Paternal AIMS – Imposing limitations on AI applications to prevent harmful outcomes.
- Commercial AIMS – Allowing markets and investors to shape AI’s future based on profitability.
- Hacking AIMS – Leveraging AI to challenge authority and disrupt existing structures.
Key Considerations for AI Governance
Identifying an AI management strategy is crucial to addressing several pressing questions:
- Is AI the root cause of a given problem? AI can amplify issues like disinformation, but misinformation existed long before AI. Over-regulating AI itself, rather than addressing the broader problem, may not be effective.
- Which type of AI is responsible? AI technology is rapidly evolving. Focusing regulations too narrowly on current generative AI models may overlook future developments.
- What should be regulated? The AI ecosystem consists of multiple components—applications, underlying models, development technologies, infrastructure, and training data. Effective governance must determine which elements require oversight and how enforcement can be ensured.
Each AIMS approach prioritizes different aspects: Paternal AIMS focus on restricting harmful applications, Bourgeois AIMS regulate AI development, Open AIMS promote AI for social good, Commercial AIMS prioritize profit-driven innovation, and Hacking AIMS seek to disrupt authority.
The objectives of AI governance are often ambiguous, and regulatory efforts may sometimes be reactive rather than strategic. Given that generative AI is still in its infancy, ill-conceived policies could hinder the development of AI-driven solutions for real-world challenges. Could privacy regulations prevent the ethical use of personal data for public benefit? Might risk aversion slow AI progress in medicine? Could concerns over AI’s opacity limit its potential in administration?
Moreover, excessive regulation in risk-sensitive regions may stifle innovation, impeding technological progress and restricting AI’s accessibility beyond wealthier nations.
Defining our AIMS clearly is a crucial first step in crafting effective AI governance strategies that balance innovation with responsibility.