I’m Getting a Whiff of Iain Banks’ Culture
by Malte Skarupke
The US has been acting powerful recently, and it reminded me of this question: what does it feel like to fight against a powerful AI? Not for normal people, for whom there’s no difference between competing against a strong human or a strong AI (you lose hard either way), but for the world’s best humans. We got a sense of the answer before LLMs were a thing, when the frontier research labs were working on game RL:
Fighting against a powerful AI feels like you’re weirdly underpowered somehow. Everything the AI does just works slightly better than it should.
If you’re not a strong human player, the closest feeling is when you play a game with lots of randomness against a really strong player. It will appear as if that strong player just keeps on getting lucky somehow.
I’m getting a similar sense for the recent US foreign interventions and wars. They all seem to work slightly better than they should. It finally clicked for me when Dario Amodei said “This technology can radically accelerate what our military can do. I’ve talked to admirals, I’ve talked to generals, I’ve talked to combatant commanders who say this has revolutionized what we can do.”
The things I’m referring to are the raid that captured Maduro in Venezuela (Claude was used), the current war with Iran (Claude was used), and the killing of a drug boss in Mexico (unclear if AI was used, but US intelligence helped Mexico).
The commentators in the AlphaGo match with Lee Sedol didn’t know what to make of most games. The AI wasn’t doing anything obviously brilliant; there were lots of little fights all over the board where the outcome wasn’t quite clear, but they all just worked a little better for AlphaGo than expected. So Lee Sedol’s position gradually changed from “this is tough, hard to tell how this is going, but at least I’m feeling good about these areas” to “hmm, I’m struggling, maybe I’m a bit behind, but it’s not clear” to suddenly “oh, I lost.”
I don’t know Go, but I got a clearer sense from the StarCraft 2 matches. In some skirmishes the AI would take damage, in others the human would. But somehow it always felt like the human was in more trouble. In some fights the human clearly came out ahead but then mysteriously just one minute later the AI had a clear advantage. It was able to quickly recover and constantly put pressure on the human. It all looked very stressful, because even when you think you do well as a human, it works out a little less well than expected and whatever the AI does works a little better than expected.
And where have we seen this pattern before? In sci-fi, of course. In particular I’m thinking of Iain Banks’ Culture, the ostensibly human civilization that’s actually run entirely by AIs. Alien civilizations keep wanting to pick fights with them for reasons, and keep being surprised by how hard the harmless-seeming Culture can whoop their ass if they make it mad.
I always thought of the Culture as closest to the European Union: seemingly harmless, but anyone who picked a fight with it would find out that the EU can get its act together very quickly and stand up the strongest army in the world. But obviously the real EU has never come close to the Culture, because nothing human ever comes close to the potential of AIs. It would be as if Russia picked a fight with Poland, gained ground for a week feeling good, only to suddenly find all of its IT systems hacked and its access to nuclear bombs revoked, with bombs dropping on Moscow the next day and an army in Moscow two days after that. The Culture takes a week to get its act together and then whoops your ass so hard you don’t even know what’s happening.
But now I’m getting a whiff of the power of the Culture for the first time, and it’s from the US. Going into another country, kidnapping its leader, and getting away with it is exactly the kind of overpowered move the Culture would be able to pull off. Bombing cities all over Iran and knocking out the entire leadership within two days, while the air-defense systems supplied by China do absolutely nothing, is another example. If this were a video game, these would be strategies executed by high-level players, but they’re not supposed to work that well.
It would be foolish to think this is entirely due to AI. The US has had a high-tech advantage for a while. Turns out the F-35 is actually good. But even a couple of years ago the US regularly messed up when it tried to operate at high precision. We saw in Iraq and Afghanistan that being overpowered doesn’t work out as well in practice as it does in theory. So I think AI is the most likely candidate for the shift to “it worked better than it should have.”
So how specifically do you get to a point where everything works slightly better than it should? We saw two different approaches in Go and StarCraft 2:
- In Go the AI was having little fights all over the board, in a way that combined into a few extra points at the end. It would defend a little bit here, attack a little bit there. It was able to keep the overall picture in its head, without feeling the pressure to resolve things too early. (I haven’t played Go, but I know I get frustrated in strategy games if I have to deal with multiple fights in different parts of the map at once.)
- In StarCraft 2 we saw the same thing, but we also saw that the AI could have perfect micro when it counts, like keeping wounded Stalkers in the frontline because it could pull them out of danger just in time. Humans could do that in theory, but in practice you can’t click that quickly and precisely.
So the two angles are “having a better high-level view” and “having better micro control.”
Another source of success for the Culture is that they’re over-prepared for fighting. (Not for their first big war, but in later books.) And this is also part of the story we hear in Iran. Normally there’s just too much going on in the world and you can’t possibly keep track of all of it. Famously, the US had prior intelligence on 9/11 but didn’t really put the pieces together. (There’s a whole Wikipedia article about it, which has phrases like “Rice listened but was unconvinced, having other priorities on which to focus.”) But AI has almost no limit on what it can keep track of. You can always spin up another agent. So when something important comes up, chances are that some AI was keeping track of it and can raise an alert. You’ll never miss opportunities just because you had other priorities to focus on.
So the third angle is: Being over-prepared because you can follow up on many more things at once.
What does all of this mean for the world? It means we’re in a weird temporary phase where one country has control of a game-changing technology while others are not far behind (sadly not the EU; I’m thinking of China, especially with H200s). You get to play at a higher level, but only for a short time and only in specific ways. In a year others will have caught up, but by then you’ll have new capabilities that you didn’t have a year ago. If this were a game you’d saturate at some point (you just can’t play StarCraft that much better than the best humans), but in real life the game keeps changing. New pieces keep coming into play and the old pieces become irrelevant. You can’t do this for long before the humans become irrelevant to the outcomes, and then you’re fully in Culture territory. I personally wouldn’t mind living in the Culture, but it seems scary to rush toward it without a good plan for how we’ll survive the transition.
I don’t have a good angle for working on that plan; maybe others do. For now my contribution is just to point out that we seem to be in the early stages of overpowered AI, and to make people notice what that feels like.