What Global Development Can Teach Us About AI Governance
Part 2: Similarities, Differences, and Case Studies
I’ve been deep in the fascinating AI governance rabbit hole lately—reading policy papers, listening to podcasts, talking to people across sectors, and mapping out where the field is headed. What’s striking is how much this emerging space echoes international development. The players are different, but the patterns are familiar: powerful actors setting the agenda, global frameworks with limited traction on the ground, and a whole lot of talk about participation with little meaningful redistribution of power.
In this piece, I want to draw out three key similarities and four structural differences between AI governance and international development—and then reflect on how case studies in food safety, climate change, nuclear non-proliferation, and pandemic response offer insight into what more adaptive, globally coordinated governance could look like.
Familiar Patterns, Different Tech
1. Asymmetrical Power and Performative Participation
In development, I often see donor interests override local needs. Even when community input is sought, it’s usually after major decisions are already made.
This is frustrating to watch, and AI governance is falling into the same trap. A handful of labs and governments—OpenAI, Anthropic, DeepMind, the EU, the U.S.—are setting the norms. Low- and middle-income countries, as well as civil society organizations, are left reacting. The term “participation” gets thrown around a lot, but much of it is tokenistic. When only a select few get to define risk and propose mitigation strategies, the rules aren’t democratic—they’re imposed.¹
2. Global Frameworks, Local Realities
Development has always wrestled with the gap between global ambition and local implementation. Take the Sustainable Development Goals (SDGs)—a set of 17 global goals adopted by all UN Member States in 2015 to end poverty, protect the planet, and promote peace. They sound great, but they don’t mean much without local capacity and political buy-in.
We’re seeing the same with AI. International agreements like the EU AI Act², the G7 Hiroshima Process, and the UN Global Digital Compact³ aim to create shared norms. But how these play out depends heavily on local context, legal systems, and enforcement capabilities. What counts as “high-risk AI” in Europe may be irrelevant—or impossible to regulate—in much of the Global South.
3. Capacity Gaps Undermine Governance
In development, policies only work when local implementers have the knowledge, tools, and autonomy to use them. That’s why capacity building has been a major focus (even if it’s often underfunded or poorly executed).
AI governance needs the same muscle. Many data protection authorities are under-resourced. In some countries, there’s one person trying to regulate all digital systems—with no real access to the companies deploying them.⁴ If governance becomes a compliance exercise that only the well-resourced can manage, it won’t protect the rest.
Why AI Is Still Its Own Beast
Before we shift to global case studies, it’s worth pausing to acknowledge that AI isn’t just a new chapter of development—it introduces some very different dynamics.
1. Speed and Scale
Development moves slowly—by necessity. Behavioral change, institution building, infrastructure rollouts: these take time.
AI doesn’t care. Models are released overnight and deployed globally within days. The “move fast and break things” approach can be dangerous when the public interest is at stake. Public sector procurement timelines can’t keep up with the tech cycle. That’s why we need agile, adaptive governance—something more like sprint cycles than five-year frameworks.
2. Opacity and Complexity
Development projects might be bureaucratic, but they’re usually legible. You can trace the money, interview participants, assess the outcomes.
AI is built on black boxes. Even developers struggle to explain model behavior. This creates new challenges for accountability and risk. We can’t regulate what we don’t understand—and sometimes even the developers don’t.
3. Global but Unbounded
Development programs are tied to borders. They operate under national laws and within sovereign states.
AI ignores borders. A model trained in California can be deployed in Kenya, modified in Singapore, and embedded in a Dubai smart city without anyone really knowing. This makes enforcement murky and jurisdiction contested.
4. AI as an Actor, Not Just a Tool
Perhaps most radically, AI isn’t just a tool—it’s becoming a kind of actor. Models learn, influence decisions, and adapt to feedback. That’s new.
Development tools—vaccines, textbooks, roads—don’t have agency. AI does. It introduces a governance challenge we’ve never had before: how do you regulate something that updates itself?
Case Studies
There are several development and international governance challenges I’ve worked on or studied that offer particularly relevant lessons for AI governance: food safety, climate change, nuclear non-proliferation, and pandemic response.
Each of these offers a different window into how we’ve approached shared global risks before—and what we might do differently this time.
Case Study 1: What Global Food Safety Can Teach Us
Food safety is a useful analogy for AI governance because it involves managing invisible, complex risks that can spread quickly and globally—like pathogens in supply chains or hidden biases in algorithms. Both fields require a blend of technical expertise, international coordination, and adaptive regulation to protect public well-being. And in both cases, the consequences of failure can be widespread, unevenly distributed, and hard to trace back to a single point of origin.
Food safety is governed through a globally coordinated, tiered model that works surprisingly well, especially compared to AI’s current patchwork.
The Codex Alimentarius Commission, co-led by the World Health Organization (WHO) and the Food and Agriculture Organization (FAO), sets international food standards. It distinguishes between hazards (something that can cause harm) and risks (the probability of harm under real-world conditions). This is a crucial distinction AI governance still struggles with.
Codex doesn’t just issue rules—it creates an adaptive framework:
Countries can opt in to standards, often using Codex to justify national policies.
Standards are tiered by risk, not one-size-fits-all.
Systems are iteratively updated with new evidence.
Traceability and labeling are central to both consumer trust and regulatory enforcement.
This model works because it blends scientific rigor with political feasibility. It gives countries common ground while respecting local realities. And it provides neutral institutions (WHO, FAO) to mediate disputes and revise standards. Just as food safety uses risk tiers, traceability, and third-party audits to monitor threats, AI governance needs similarly layered, accountable systems to detect and respond to harm.
If AI governance had something like Codex, we’d be in a much better place.
Case Study 2: Climate Change and the Limits of Non-binding Norms
Like climate change, AI governance is a borderless challenge defined by shared risk but fragmented incentives. Both involve powerful actors with asymmetric influence, depend on voluntary cooperation, and suffer from weak enforcement mechanisms. And in both cases, the temptation to delay action—or to free ride on others’ restraint—runs deep.
Climate governance offers a cautionary tale.
The Paris Agreement (2015) set global temperature goals and encouraged countries to submit their own climate action plans. It’s participatory and flexible—but non-binding. That makes it easier to agree on, but hard to enforce.
The result? A framework that fosters dialogue but often lacks teeth. And yet, it created common language, norms, and metrics that have helped shape private-sector investment and national regulation.
Climate governance also gave us some strong implementation tools—like carbon markets, third-party verification, and the IPCC scientific process—that could inform AI governance. But it’s a reminder that voluntary cooperation, without accountability, can only get us so far.
Case Study 3: Nuclear Non-Proliferation and Supranational Enforcement
Like nuclear weapons, frontier AI systems pose profound risks—global in scope, hard to contain, and potentially destabilizing. Both demand international coordination, verification, and trust-building between actors who might otherwise be rivals. And in both cases, the tools of governance need to reach across borders without starting wars.
The Nuclear Non-Proliferation Treaty (NPT) is one of the most robust international governance models we have. Non-nuclear-weapon states agreed not to acquire nuclear weapons, subject to international inspections by the International Atomic Energy Agency (IAEA), while the recognized nuclear powers committed to pursue disarmament.
What makes it work?
Verification powers: The IAEA can inspect nuclear sites—something few AI governance bodies can do today.
Supranational authority: States gave up some sovereignty in exchange for global stability.
Incentives: Countries that comply get support for peaceful nuclear energy.
Could we do something similar for AI? Possibly—but with major caveats. Unlike uranium, AI models can be trained anywhere, copied instantly, and deployed invisibly. Still, nuclear governance shows it’s possible to balance sovereignty and oversight when the stakes are high enough.
Case Study 4: Pandemic Response—Speed, Trust, and Politics
Pandemics, like AI, are global challenges shaped as much by information flow and institutional trust as by technical tools. Both demand proactive coordination, resilient systems, and the ability to act before harm escalates. And in both cases, once misinformation spreads or trust erodes, even the best tools may not be enough.
The West African Ebola response (2014–2016) was, after a slow start, a relative success story. International coordination, rapid deployment, and strong public health messaging helped contain a potential catastrophe. It also underscored the importance of investing in local capacity before disaster hits.
COVID-19, by contrast, revealed how fragile that system can be. Disinformation, political division, and uneven access to resources severely undermined the global response. But it wasn’t all failure—Operation Warp Speed showed what’s possible when governments remove bureaucratic friction and partner with industry to accelerate innovation. Effective vaccines were developed and distributed in record time.
For AI, this offers both a warning and a blueprint. Governance isn’t just about technical accuracy—it’s about building trust, investing early, and coordinating across borders. Because once public legitimacy breaks down, recovery is much harder than prevention.
Avoiding the Next Governance Crisis: What You Can Do
AI governance isn’t the first time humanity has tried to get ahead of a fast-moving, high-stakes challenge. But it might be the most complex—and the least forgiving. If we treat it like business as usual, we’ll replicate the same inequities, power imbalances, and brittle systems that have failed us before.
The case studies above show what’s possible when we get it right—and how dangerous it is when we don’t.
If you’re in government, civil society, philanthropy, or tech—and especially if you work on issues like health, education, climate, or human rights—AI governance is now part of your job. You don’t need to be a machine learning expert. But you do need to stay informed, ask better questions, and push for governance that reflects your values—not just what’s technically feasible.
📣 What You Can Do
Pay Attention
Did you know Congress tried to quietly slip a 10-year ban on state AI regulation into the federal budget bill? It didn’t pass—but the fact that it was even proposed should concern all of us. This is how power gets consolidated, how democratic input gets bypassed, and how decisions with massive implications get made behind closed doors. If we don’t speak up, someone else will write the rules for us.
Map Your Risks and Leverage
Where does AI intersect with your mission, your community, or your organization? What’s at stake—jobs, equity, autonomy, access? What values do you want to protect, and where can you influence how decisions get made?
Join the Conversation Early
Don’t wait for the frameworks to be finalized. Right now, governance is still fluid. This is the window to help shape the rules—before they’re written by and for the most powerful players.
Push for Inclusion
Who’s in the room where AI governance decisions are made—and who’s left out? Communities most impacted by surveillance, automation, and misinformation are rarely at the table. That needs to change.
This moment is wide open. It’s messy, fast-moving, and often opaque. But that also means there’s room to shape the outcome—if we act now.
Let’s not just react to governance failures after they happen. Let’s build systems that are proactive, participatory, and prepared.
Coming Next: Governing in Practice
In Part 3, we’ll move from theory to action. What mechanisms are actually on the table to govern AI? From model audits to licensing regimes, from red teaming to third-party oversight, we’ll break down the emerging policy toolkit—and ask: is any of it enough?
Because if we want AI to serve the public good, we have to design governance that moves at the speed of the technology—and centers the people who stand to be most affected.
Anthralytic helps mission-driven organizations navigate complexity, make better decisions, and shape more human-centered systems. We blend strategy, evaluation, and ethical AI to support long-term impact—grounded in clarity, rigor, and reflection.
If you're working at the intersection of social change and emerging tech, let’s talk: anthralytic.ai
1. https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf
2. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
3. https://www.un.org/digital-emerging-technologies/global-digital-compact
4. https://cset.georgetown.edu/publication/2024-annual-report/