Last week, the European Union passed the AI Act, a landmark, EU-wide set of regulations on the use of artificial intelligence. As you’d expect, it applies various levels of prohibition and permissioning to generative applications like ChatGPT. But it also applies to conventional tools, like facial recognition and, if you read it closely, even to things like plain-old linear regression. The impact is sweeping, and I suspect it will once more have a chilling effect on technology in Europe. This is dangerous for Europeans: in the long term, it may be yet another stone on a road to serfdom.
I grew up in Europe (mostly Germany, Denmark, Switzerland). I had never even set foot outside the continent until I was 18, when I moved to the United States. I have lived here for 12 years now, with most of that in San Francisco. This gives me some perspective on the disconnect between European attitudes and the creation of prosperity by technological advancement. In this essay, I will cover:
Why the AI Act is bad;
Providerism as the reason why Europe fails to build technology and create wealth;
What this means for Europe;
How to fix it.
(1) Why the AI Act is Bad
If you read a summary of the AI Act, one thing will stand out to you immediately: it’s all regulated at the application level. The flavor of regulation is stuff like “you’re not allowed to use AI to exploit vulnerable groups,” “if you’re using AI for educational purposes, it has to pass this set of EU tests,” and “you can’t use facial recognition for these forbidden purposes.”1 This style of regulating-by-patchwork-of-many-distinct-but-related-rules is infamously ineffective:
Because it lacks unifying/general principles, new regulations will need to be added for every future case.2 There’s a new application? Let’s spend a thousand hours of lawmaker time on adding one more rule set.
The specificity of the rules ironically makes it unclear whether they apply or not, which invites adversarial lawyering: “is my application really using biometric data? No, it’s metadata…” or “does my application do forbidden social credit scoring? No, we’re really selling an actuarial risk assessment and you choose how to use it…”
On the other hand, the AI Act regulates techniques so basic that most technology companies will be technically out of compliance in some way from day one.
This is regulation at its worst: broad, sweeping, everyone’s technically out of compliance in some way, but you can pay to fight it. Obedient, rule-following entrepreneurs will waste all their time and money trying to comply with every paragraph of the AI Act. More Realpolitik-minded entrepreneurs will position their business such that it’s in a gray area, assume that if they’re successful, they’ll get sued one day, and save some funds for that eventuality.
And if they get sued, what are the penalties? Up to 7% of global annual turnover. This is a farce. If the EU’s contention is that this technology is so dangerous that it requires EU-wide regulation, then the penalties should actually be a lot higher. If there’s a bad actor running a massively abusive AI business, and the biggest threat they face is a 7% revenue penalty that they can probably knock down to 2% after a few years of litigation — then that’s no deterrent at all! These businesses run at 75% gross margins. You have one year that runs at 73%? Doesn’t matter.
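For what it’s worth, that arithmetic checks out; here is a quick back-of-the-envelope sketch, using only the illustrative figures above (not data about any particular company):

```python
# Back-of-the-envelope check of the fine-vs-margin argument above.
# The figures are the essay's illustrative numbers, not real company data.
gross_margin = 0.75  # "these businesses run at 75% gross margins"

for fine_rate in (0.07, 0.02):  # headline penalty vs. litigated-down penalty
    # A fine expressed as a share of turnover comes straight off the margin
    # in the year it is paid.
    effective_margin = gross_margin - fine_rate
    print(f"fine of {fine_rate:.0%} of turnover -> one year at {effective_margin:.0%} gross margin")

# Output:
# fine of 7% of turnover -> one year at 68% gross margin
# fine of 2% of turnover -> one year at 73% gross margin
```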
This puts the AI Act in the dismal middle ground of regulation: annoying enough to dissuade legitimate entrepreneurs, toothless enough to not prevent large-scale abuse. I am shocked that the AI Act does not include any capability for banning something that would actually be dangerous, like a nation-state-funded, AI-optimized propaganda machine masquerading as a social network. If they can’t ban products, then it’s not consumer protection: it’s just wealth extraction.3
(2) Providerism
It is extremely telling that the AI Act’s regulations are all at the application level. The AI Act is drafted from the perspective of a consumer rather than a producer.
I do not endorse regulating AI technologies at this point in time. But for the sake of argument, if you wanted to regulate AI, I think you’d want to regulate somewhere at the production level, not at the consumption level.4 Why is it that the EU regulators are focusing entirely on the consumption level?
Well, because they are consumers. Europe is the continent of consumption. This is deeply ironic, because Europeans will thumb their noses at America and call it a consumerist society: runaway fast food obesity, endless billboard advertising, hapless folks drowning in credit card debt. But while America may be consumerist at the micro-level, it is highly productive at the macro-level: the US makes tons of great stuff. From medicine to fundamental scientific research to technology to space travel, we’re leading the charge. European individuals may not be consumerists, but Europe is a macro-consumer: virtually everything of value comes from elsewhere.5
I didn’t really get this until I moved to San Francisco. I had never in my life met people who make stuff. In Europe, my parents worked for non-profits. The parents of my friends were mostly middle managers, financiers, or professional service providers. Living in Silicon Valley is profoundly different, because the people you meet are working on building things that you use. It is hard to articulate just how colossal that difference in exposure is. In Europe, I used computers all day, but I never gave a thought to where computers actually come from: you buy them at the store, and the store abstracts everything upstream away. It feels like I was sleepwalking through economic life.
I call this Providerism: the ability to ignore political-economic reality because everything is provided for you, and the underlying mechanics and costs are abstracted away. Europeans may not be consumerists, but they are hardcore providerists. Growing up, virtually every consumer good I interacted with was made in Japan or made in China, and in 18 years that never gave rise to more than 15 minutes of conversation. The goods that you want appear from far away in your local store: they are provided. And if you fall on hard times and cannot afford these goods? The state will provide the basics. If you’re 22 and don’t know what to do, the state will provide more time and will provide a master’s degree. Or two. And if there is war, the state will call NATO, and NATO will provide defense.
Just before the final draft of the AI Act, Thierry Breton of the European Commission posted a picture of the team hard at work:
How fitting: the European Commission assembled around an iPad. On the back, in fine print, it will read: Designed by Apple in California. Assembled in China. There is no European iPad. There is no European computer. There is no European search engine. There are only European consumers, to whom things are quasi-magically provided, and so they regulate the providing and consumption of those things.
The European Union is so deep in Providerism that it does not recognize how far removed it is from the production of things of value. This myopia is a great peril for the citizens of Europe. Every year that passes, Europe slips deeper into complacency as goods and services are provided from abroad, while regulators are writing missives and assessing fines in impotent, play-acting gestures of agency. This encumbers European technological entrepreneurship by weakening domestic entrepreneurial network effects6 and setting ever higher local barriers to entry. This is terrible, because for Europe, the main way to maintain and achieve long-term prosperity is obviously to innovate technologically and produce things of value.
(3) The Writing on the Wall
Europe is falling behind. It largely missed the internet and personal computing booms, and now it sits in danger of missing the coming AI boom. Today’s Europeans are not yet poor — they are still living off the prosperity created by prior generations, and that enables their passive consumption7 — but tomorrow’s Europeans may be.
Further, European regulatory gamesmanship can’t go on forever. Foreign firms don’t just have to comply with the AI Act, but also with GDPR and many other EU standards. At some point it’s all just too cumbersome and foreign firms will play hardball.8 In that situation, the EU will lose, because it mostly doesn’t have domestic alternatives to major foreign software products, and EU consumers9 are not going to be willing to go back to the stone age.
This is a poor setup, both for European consumers and for the prosperity of Europe. The EU needs to course-correct swiftly and decisively.
(4) How to Fix It
Europe needs to escape Providerism, discourage complacent reliance on outside goods and services, and encourage the virtue inherent in making new things. This will require tremendous leadership. The task ahead is no small feat, and it will be unpopular. People like having things provided to them, but this state of affairs cannot last. From a public-budgets perspective, you can already see Providerism breaking down all over the EU. The great task ahead for European regulators is to facilitate wealth creation, not wealth consumption. My recommendations:
Deregulate Technology. Throw it all out and start over. In the future, regulate big negative externalities, not imaginary potential ones. Do not presume need.
Federate. The EU needs to make it easy for businesses to expand across the entire EU market. Right now, there are many legal and financial barriers to doing so.
Foster a Production Mindset. It is important for Europe to repatriate some production of cutting-edge goods and services: not even for economic reasons, but just culturally. Focus on making things.
Lean into the AI Boom. The AI boom is a unique opportunity for Europe because it is tied to academic research, and the EU’s many universities are graduating great talent with no debt. Mistral is an amazing company: there should be more!
1. My favorite clause is on the prohibition of “AI systems that manipulate human behavior to circumvent their free will.” What a premise!
2. If you ever wonder how we got legal systems with thousands of byzantine and obstructive laws left over from the 1800s: this is how. One IF-statement at a time.
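To illustrate the analogy, here is a caricature of my own (not text from the Act) of what application-level rule-making amounts to; the rule names and fields are invented for the sketch:

```python
# A caricature of application-level rule-making: every new use case gets its
# own special-case clause, and the list only ever grows.
FORBIDDEN_BIOMETRIC_PURPOSES = {"mass surveillance", "social scoring"}

def is_permitted(app: dict) -> bool:
    if app.get("exploits_vulnerable_groups"):
        return False
    if app.get("domain") == "education" and not app.get("passed_eu_tests"):
        return False
    if app.get("uses_biometrics") and app.get("purpose") in FORBIDDEN_BIOMETRIC_PURPOSES:
        return False
    # ...plus a new IF-statement for every application nobody has thought of yet.
    return True

print(is_permitted({"domain": "education", "passed_eu_tests": False}))  # False
```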
3. We see this extractionist mindset all the time: European politicians always drum on about supposed privacy invasions from Google and Facebook, then fine them for some totally unimpactful amount of money, and then they pipe down for six months before starting over again. If those politicians had real grievances, they would try to ban those products, or build local alternatives. They do neither. They’re just selectively enforcing regulation to extract what I view as a bribe to operate.
4. The argument for this is tangential to the main point, so I’ve relegated it to the footnotes:
Regulating at the consumption level doesn’t work for all the reasons from the first section. You wind up with a crazy patchwork of rules, and everything is maybe covered, maybe not. It’s so impractical that you could call it a jobs-creation program for regulatory litigators.
The reason for regulation is that lawmakers are afraid of the sophistication of these models. That’s why the AI Act got drafted now and not 10 or 20 years ago. Machine learning models have existed for decades, but the results are just much better today.
Sophistication is mostly synonymous with scale. If you want to regulate sophisticated models, then you pick some scale thresholds: for example, you say that models trained with more than 10^26 floating-point operations or on more than 50 million datapoints are subject to registration/inspection. (From there, you can check that the model doesn’t violate existing law, e.g. anti-discrimination statutes.) That’s much simpler, cleaner, and less ambiguous than the current edge-case circus of the AI Act.
The point on not violating existing law is significant: in many respects, AI does not create new opportunities for malfeasance, but just scales up existing ones. Those existing ones should already be covered by existing law! The set of truly new scenarios for regulation to address seems quite small to me.
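For concreteness, here is a minimal sketch of the producer-side threshold rule described above; the cutoffs are the illustrative numbers from the previous paragraph, and the names, fields, and structure are my own assumptions, not anything drawn from the Act:

```python
# Sketch of a scale-threshold rule at the production level: models above a
# compute or data cutoff must register for inspection. Cutoffs are the
# essay's illustrative numbers, not real regulatory criteria.
from dataclasses import dataclass

TRAINING_FLOP_THRESHOLD = 1e26             # total floating-point operations used in training
TRAINING_DATAPOINT_THRESHOLD = 50_000_000  # number of training examples

@dataclass
class TrainingReport:
    model_name: str
    training_flops: float
    training_datapoints: int

def requires_registration(report: TrainingReport) -> bool:
    """A model crossing either scale threshold is subject to registration/inspection."""
    return (report.training_flops > TRAINING_FLOP_THRESHOLD
            or report.training_datapoints > TRAINING_DATAPOINT_THRESHOLD)

# Example: a frontier-scale training run crosses the compute threshold.
print(requires_registration(TrainingReport("frontier-model", 3e26, 10_000_000)))  # True
# A small model trained on modest data does not.
print(requires_registration(TrainingReport("small-model", 1e21, 100_000)))        # False
```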
5. You might disagree and point at European cars, German steelworks, British petroleum, and so forth. But these are the colossuses of yesteryear. They are fundamentally not dealing in new technologies. Most of them are in decline, losing market share, and we will see them disappear. More precisely, we will see them purchased and depleted by foreign private equity firms. This is already happening.
6. Here I’m just trying to make a basic protectionist point: the more a nation relies on imports and does not manufacture goods domestically, the harder it is for the nation to build up domestic manufacturing. Producing things has a geographic network effect to it. It’s much easier to get started when there are other folks around who are also producing things. New arrivals benefit from collective infrastructure and expertise.
7. I am not the first person to make this point. Writing this, I was reminded of “Europe is the Free-Rider Continent” from The Economist.
8. In practice, this might mean:
Blocking parts of their service for European customers, the way Mark Zuckerberg did in Canada. Threads rolling out in the US six months before the EU is another example.
Ignoring EU regulation entirely.
Officially stopping service to European customers, who will then have to connect via a US VPN, similar to how they might evade Netflix’s country blocks.
It would be funny if over-regulation of consumed goods put Europe into a situation where it can no longer regulate (or consume) those goods.
9. Curiously, I don’t hear very much about the actual stated or revealed preferences of European consumers in the first place. My hunch is that the consumers do not actually make much use of the various consumer protections that the EU provides them: I click “accept cookies” on every cookie popup because the alternative flow (clicking decline and then various other buttons every time I load a webpage) is just too impractical. I cannot spend my life clicking through popups.