AI Sovereignty Isn't Just a Government Problem — It's Your Next Board Question

Key Takeaways
AI sovereignty isn’t just about where your data lives. It’s about who controls the compute, whether your models understand your market, and whether you can operate independently when the landscape shifts.
Every major economy is building its own AI stack. The EU set the compliance benchmark. Japan is buying sovereignty with infrastructure. South Korea went hard law first. Singapore is building regional AI models. The rules are diverging, not converging.
My take: Sovereignty is an architecture decision you make upfront, not a compliance checkbox you tick later. If your AI depends entirely on foreign infrastructure, foreign models, and foreign talent — you don’t own your AI capability.
This Isn’t Theoretical Anymore
I spoke at the DEPA Thailand event at the end of March 2026 on “Accelerating Public Sector AI.” The question that came up most wasn’t about models or architecture. It was: “If our AI depends entirely on foreign infrastructure, foreign models, and foreign rules — do we actually own our AI capability?”
That question used to come from government agencies and banking CTOs. But now I hear it from telecom CDOs and retail heads of AI. And it’s not just about where data lives. It’s about who controls the compute, whether your AI models understand your market, and whether you can operate independently if the geopolitical landscape shifts (as we have seen recently with the Strait of Hormuz dispute; the shift toward deglobalization is real).
Jensen Huang put it well at the World Government Summit: the first wave of AI was about private sector innovation. The second wave is about nations building AI that reflects their own identity, culture, and language. That’s sovereign AI — and it’s happening now.
The numbers confirm what I’m seeing. Gartner predicts that 65% of governments will introduce sovereignty requirements by 2028 (Gartner via Deloitte). Forrester says half of APAC enterprises will make sovereignty a top criterion for AI platforms this year (Forrester via ComputerWeekly).
But here’s what the reports miss: sovereignty isn’t just about data residency. It’s about control across the entire AI stack.
Three Questions Your Board Should Be Asking
Do you (truly) control your AI infrastructure?
This goes beyond “which cloud region is my data in.” Sovereign AI means having access to the compute — the GPUs, the data centers, the training infrastructure — within your jurisdiction or under your control. Your first instinct may be to build traditional on-premises data centers. But is that really the answer? How long is your procurement process? What will you do when NVIDIA releases a newer generation of GPUs? What if quantum chips go mainstream?
Japan committed $6.3 billion over five years specifically for sovereign AI compute. South Korea is deploying 15,000 advanced GPUs by 2027 to reduce reliance on foreign ecosystems. These are national strategies.
For enterprises, the question is simpler but equally important: if your cloud provider decides to deprioritize your region, raise prices, or restrict access to certain AI models — what’s your fallback? If the answer is “we don’t have one,” that’s a sovereignty gap.
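One way to close that sovereignty gap in practice is fallback routing: if the primary provider restricts access, requests fail over to a locally controlled model. The sketch below is a minimal illustration of the pattern; the provider names, regions, and failure behavior are hypothetical, not real vendor APIs.

```python
# Minimal sketch of multi-provider fallback routing.
# All provider names and behaviors here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Provider:
    name: str
    region: str
    call: Callable[[str], str]  # prompt -> completion


def route(prompt: str, providers: List[Provider]) -> str:
    """Try providers in priority order; fall back when one fails."""
    errors = []
    for p in providers:
        try:
            return p.call(prompt)
        except Exception as exc:
            errors.append(f"{p.name} ({p.region}): {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))


def restricted(prompt: str) -> str:
    # Simulates a foreign provider cutting off access overnight.
    raise PermissionError("model access restricted in this region")


providers = [
    Provider("global-frontier-model", "us-east", restricted),
    Provider("sovereign-local-model", "ap-southeast", lambda p: "answered locally"),
]
print(route("Summarise this contract", providers))
```

The point is not the ten lines of code; it is that the fallback path has to exist, be tested, and be good enough to carry the workload before the primary path disappears.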
Do your AI models understand your market dialect?
Global models are trained primarily on English data. They don’t understand Thai regulatory language, Bahasa Melayu customer sentiment, or Japanese business honorifics. They produce outputs that are technically correct but culturally wrong, and in regulated industries, culturally wrong can mean legally wrong. Moreover, across the APJ region we have rich histories, diverse dialects, and different religions.
This is why Singapore built SEA-LION — an open-source LLM supporting 13 Southeast Asian languages. It’s why Japan is funding Sakana AI to build Japanese-tailored models for finance and defense. It’s why South Korea committed 530 billion won to five companies building sovereign foundation models. I’ve been involved firsthand in discussions about building an SLM (Small Language Model) for the Kam Mueang language.
The question for your board: are you building on models that were designed for your market, or are you hoping a Silicon Valley model will figure out your context?
Can you operate independently if the landscape shifts?
This is the hardest question. Vendor lock-in has always been a concern, but with AI it’s existential. Your models are trained on your data. Your agents are tuned to your workflows. Your guardrails are built for a specific platform.
But it’s not just vendor risk. Geopolitical shifts can change access overnight. Export controls, sanctions, policy changes — any of these can cut off your AI capability if it depends entirely on foreign infrastructure, foreign models, and foreign talent.
True sovereignty means you can keep operating. That requires local talent who can maintain and evolve your AI systems, portable architectures that aren’t locked to one vendor, and governance frameworks that are yours — not borrowed from another jurisdiction.
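The “portable architecture” piece is concrete: application code should depend on a vendor-neutral interface, not on one provider’s SDK. A hedged sketch, assuming a simple chat-style API (the class and method names are illustrative, not any real SDK):

```python
# Sketch of a vendor-neutral model interface.
# Names are illustrative assumptions, not a real library.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the given prompt."""


class LocallyHostedModel(ChatModel):
    """Adapter for a model running on infrastructure you control."""

    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


def answer_customer(model: ChatModel, question: str) -> str:
    # Workflows depend only on the interface. Swapping vendors means
    # writing a new adapter, not rewriting agents and guardrails.
    return model.complete(question)


print(answer_customer(LocallyHostedModel(), "What are my data residency options?"))
```

The adapter is trivial; the discipline is not. The moment guardrails or agent logic call a vendor SDK directly, the lock-in has already happened.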

The Global Sovereignty Race
Every country is taking a different path — and that’s the challenge for enterprises operating across borders.
The EU set the benchmark. The AI Act’s high-risk compliance deadline hits August 2, 2026 — if your system can’t demonstrate conformity, you can’t place it on the EU market. Period. The EU’s 200 billion euro sovereignty vision includes 15 AI factories by 2026. What starts in the EU doesn’t stay in the EU — these standards become the baseline global enterprises adopt everywhere.
Japan is buying sovereignty with infrastructure, not legislation. $6.3B government commitment, $10B from Microsoft, and Sakana AI building Japanese-tailored LLMs for finance and defense. Soft regulation, massive investment.
South Korea went the opposite way — hard law first. The AI Basic Act (enacted January 2025, enforced January 22, 2026) mandates human oversight for high-impact AI and requires AI-generated content labeling. Plus 530 billion won funding five sovereign foundation model companies.
Singapore leads ASEAN with the world’s first Agentic AI Governance Framework (WEF, 22 January 2026) and SEA-LION — a 13-language open-source LLM built because global models don’t understand regional context.
Malaysia confirmed its AI Governance Bill, allocated RM2 billion for a sovereign AI cloud, and is building the region’s first AI safety institution.
The pattern is clear: every major economy is building its own AI stack. The question for enterprises operating across these markets isn’t whether sovereignty matters — it’s how to architect for a world where every country has different rules.
What I’d Tell My Board
If I were sitting in front of a board today, I’d say three things:
Audit your AI data flows this quarter. Map where and how your AI agents store data, process data, and make decisions. If you cannot answer these questions, that’s a risk you’re carrying without knowing it.
Build sovereignty into the architecture now. Data residency, operational controls, and portability — designed in from the start. Retrofitting sovereignty after deployment is expensive, disruptive, and sometimes impossible.
Don’t assume one framework covers all markets. If you operate across ASEAN, design your architecture to handle per-country requirements as configuration, not code changes. The regulatory landscape is diverging, not converging.
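“Per-country requirements as configuration, not code changes” can be as simple as a policy table the deployment pipeline consults. The sketch below is illustrative only: the rule values are placeholders standing in for requirements your counsel would actually supply, not legal guidance.

```python
# Hypothetical per-country policy table. Values are illustrative
# placeholders, not legal advice; real entries come from counsel.
POLICIES = {
    "SG": {"data_residency": "in-country", "human_oversight": False, "label_ai_content": False},
    "KR": {"data_residency": "in-country", "human_oversight": True,  "label_ai_content": True},
    "EU": {"data_residency": "in-region",  "human_oversight": True,  "label_ai_content": True},
}


def requirements_for(country: str) -> dict:
    """Resolve deployment requirements from configuration, not code branches."""
    if country not in POLICIES:
        # Fail closed: no configured policy means no deployment.
        raise LookupError(f"no policy configured for {country}")
    return POLICIES[country]


print(requirements_for("KR"))
```

Adding a new market becomes a reviewed change to the table, not a fork of the codebase — which is exactly what a diverging regulatory landscape demands.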
The Bottom Line
Sovereignty isn’t a compliance checkbox you tick once. It’s an architecture decision that determines whether your AI can survive the next regulatory shift, the next vendor change, or the next board question you didn’t see coming.
Know who controls your compute. Know whether your models understand your actual market. Know that you can operate independently if the landscape shifts.
Read more: Gartner via Deloitte, Forrester via ComputerWeekly, Hogan Lovells — Singapore, Google DeepMind — SEA-LION, WebProNews — Sakana AI Japan, Future of Privacy Forum — South Korea, TeckNexus — Korea Sovereign AI, AIinAsia — Malaysia, EU AI Act — Regulation 2024/1689, TechTarget — EU Vision