#28: Sovereignty in the Age of LLMs
When ChatGPT launched a little over two and a half years ago, many people were impressed by its capabilities but assumed that LLMs would be introduced into sensitive contexts only slowly, as they proved themselves against even the most exacting standards. That would have been similar to how we’ve been rolling out self-driving cars: gradually and carefully!
But this has not been the case at all. LLM adoption has been everywhere, for everything, all at once. Ordinary people use LLMs as their therapists, students use them to write papers, lawyers use them for case filings, executives use them to draft their public statements, and politicians use them to draft policies, from official memorandums down to tariff proposals. The format is incredibly seductive: the answer comes instantly, it’s usually good, the prose is nicely written, it feels authoritative, and you can get it at whatever level of detail you want.
The reason LLMs have made their way so quickly into even the most sensitive contexts is that there’s an analog hole problem: even if an organization forbids the use of LLMs, at the end of the day a task is given to an individual person, that person has broad latitude in how to complete it, nobody will know if they put the key questions to an LLM, and the outputs almost always clear the “good enough” bar with minimal effort.1
Voters use LLMs too, of course. Being an informed member of a democracy is hard: there’s so much going on. Even politicians who are in the thick of it all day every day are stretched thin for attention. I’m sure that voters and policymakers alike are finding LLMs to be a real godsend, helping them make sense of complex issues and hundred-page policy proposals. There’s no doubt that this is an improvement.
Centralization
What this points toward is a future where everyone is outsourcing their knowledge and reasoning, all the time. And I agree with Noam Brown that this largely isn’t going to be a multi-model future: raw scaling seems to obviate the need for distinct models or an architecture that routes between them. Between returns to scale in usage2 and economies of scale in hardware and compute, the future of technology looks much more centralized than it has historically. There may be only a few companies providing state-of-the-art LLM services.
This raises an obvious concern about centralization. Societies today rely on a vast network of digital products and services, provided by all kinds of people with all kinds of backgrounds. The same is true for knowledge; we distill our views and opinions from a global jumble of sources. But the future looks different: both will be far more centralized.3 A very large percentage of all your interactions with a computer, or questions about the world, might just be with OpenAI.4 Remember how your school teachers told you not to use Wikipedia as a primary source, and yet practically everyone does anyway? Dial that up by a factor of ten.
Manipulation
We’ve also seen how vulnerable people are to LLMs. There are plenty of stories of lonely people falling in love with LLMs, or falling into what some call LLM Psychosis.5 There’s even a very recent public example of this happening to a prominent investor with billions of dollars under management.6 These technologies are still in the early stages of their rollout, but it’s already clear that psychological dependency on LLMs will be a big theme of the coming years.
But explicit cases, like AI companions pushing their users toward unsavory actions, are not the only worry. Maybe you’re worried about LLM companies parroting the views of their executives, or nation-states pushing propaganda through their LLM firms, or history being censored or rewritten around controversial events. Maybe you’re worried about subtler, “just asking questions”-style propaganda designed to sow doubt and discord rather than push any particular view.
Those risks are real, but they can be subtler still. Imagine asking an LLM what the causes of World War Two were, and consider these two answers:
“The main causes of WW2 were…”
“The mainstream theories about the causes of WW2 were…”
Even if the factual content that follows is identical, the latter answer softly implies that the question is not settled (these are just theories) and that, alongside the mainstream theories, there must be contrarian, more intriguing ones. You know exactly what the user is going to ask next. That’s all it takes. The wording here is totally benign; it would pass any safety test. And yet this kind of nudge, applied over and over again to any number of questions, across a whole population, would surely move public opinion and beliefs. Remember all the concerns about nation-state misinformation on social media? Again, dial it up by a factor of ten.
Democracy
The core principle of a democratic nation is that of popular sovereignty: the power of the government comes from the people, it governs only with their consent, and it is independent of any other power.
Conventionally, the people give power to the government by expressing their views through a voting process. But if everyone is ultimately getting their views from an LLM, and that LLM is biased in some way, then your core apparatus for bestowing consent has been compromised.7 Popular sovereignty has broken down. If you take a more aggressive position, you might say that sovereignty has been lost to the LLM provider: they now dictate the facts and views that permeate your culture, and everything is downstream of that.
Sovereignty by LLM
This suggests that, in the future, a country won’t be confidently sovereign unless it is able to train its own LLM8 from scratch.9 That may seem like a tall order, but it’s necessary:10 if the essence of your state is a consensus mechanism for navigating the facts of the world, then you can’t have someone else dictating those facts. You can’t even risk the possibility. Importantly, this concept is not new: in the US, foreign media ownership used to be heavily restricted, and it still is elsewhere.
When it comes to LLM sovereignty, the US and China11 are both in fine positions: virtually all the leading LLM providers are American or Chinese companies. Saudi Arabia and the UAE are also well situated in this respect: they may not currently have their own LLM providers, but they have, or are going to have, huge data centers on which LLMs are trained and run. Owning the hardware helps.
Europe is, as usual, in trouble. Unwise energy policy has saddled many European countries, particularly Britain and Germany, with some of the highest electricity costs in the world, making it hard for them to operate competitive data centers. To boot, they’ve been so focused on regulating these assets rather than owning them that they have no frontier LLMs and therefore no seat at the table at all.
Conclusion
LLMs are going to be everywhere, informing all decisions, even on the most sensitive political matters. If democracies come down to information, and information comes down to LLMs, then any country will need its own LLM, probably even running in domestic data centers,12 as a simple matter of sovereignty. Few politicians currently seem to understand this, and many countries appear to be sleepwalking into a rapidly changing world. We may see supra-national alliances form, or expand their scope of responsibilities, to pool resources for this purpose. The big question on my mind is how quickly this will happen, and whether it will take some kind of incident to stir concern and spur nations into action.
Footnotes
1. Ironically, this is particularly severe for sensitive contexts: any complex, sensitive matter requires especially careful, diligent thought. This makes it even more attractive to offload to an LLM; people are lazy!
2. By this I mean: an LLM service that is used more often will gather more data on how people rate its answers, which enables it to improve its service. More usage means a better feedback loop, which means a better product.
3. Importantly, this is where the discussion of LLMs differs from prior discussions of sovereignty concerns with respect to technology. Exposure to a single provider is going to be much more concentrated with LLMs than with, e.g., cloud computing or social media.
4. Or Anthropic, or Google, or whichever other LLM provider you choose.
5. Some references: a recent high-profile New York Times article; a very long LessWrong post on LLM Psychosis and Sycophancy.
6. Not mentioning the name here, since that kind of pile-on feels yucky. If you really care to find out, I’m sure you can.
7. In a similar vein, an autocracy where the autocrat is being spoon-fed their opinions by an LLM with an agenda would be similarly compromised. Functionally, this is less of a concern, because most people in democratic nations do not have a concept of a “legitimate” or a “compromised” autocracy.
8. I’m not suggesting that LLM companies should be nationally owned like public utilities; the free market is the right arena for producing the best products. But some kind of public-private partnership, as we have in other areas of national-security importance, seems like the right operating model.
9. Open-weight models won’t suffice: you don’t know how the weights were determined, and you have to be wary of the LLM being compromised in some extremely subtle way. The only exception would be a fully open, auditable, and verifiable LLM training run; it’s not clear to me whether that’s a realistic possibility, I just don’t know.
10. It’s also necessary not just as a matter of sovereignty but as a matter of commercial reality: if LLMs become deeply interwoven into the operations of your country’s economy, then that dependency poses some risk.
11. China is not a democracy, of course, but it still has a large, complex political apparatus for determining and making sense of the facts of the world. This raises all the same concerns around LLMs, just for a smaller subset of the population.
12. I think there are likely versions of the world where many people run small, open-weight models on their own personal hardware at home. Such open-weight models would still need to be trained and published by some firm, which raises the question of who is training those models and where, again invoking the theme of sovereignty.