Shall We Play A Game?
This will contain a major spoiler for the Brat Pack movie WarGames, but since it came out in 1983, if you’ve not seen it yet it’s probably safe to assume you’re never going to.
In WarGames, Matthew Broderick and Ally Sheedy stave off the apocalypse when they persuade a military supercomputer called Joshua that nuclear war is futile, by having it learn the un-winnability of tic-tac-toe. The premise for WarGames seemed simplistic in the eighties (although the hacking element of the film did spur Ronald Reagan into issuing the first Presidential directive on cyber security), but in their 8-bit argument against the Cold War the writers predicted the ultimate challenge for humanity under an AI-controlled missile system. Could an LLM trained on the doctrine of mutually assured destruction learn to choose no war at all?
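For the record, Joshua’s homework checks out: exhaust the game tree with minimax and tic-tac-toe with perfect play on both sides is a dead draw. Here’s a minimal Python sketch of that (mine, not anything from the film or, one hopes, NORAD):

```python
# Minimax over the full tic-tac-toe game tree. With perfect play by both
# sides the value of the empty board is 0, i.e. a draw: the
# "un-winnability" Joshua learns in the film.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    moves = [i for i, s in enumerate(board) if s == "."]
    nxt = "O" if player == "X" else "X"
    results = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(results) if player == "X" else min(results)

print(value("." * 9, "X"))  # 0 -> perfect play is always a draw
```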
In the development of large language models, a major stumbling block in having them do anything except predict the most likely next word (which is all ChatfuckingGPT is doing, please stop using it for the love of a giant robot god) is that they couldn’t develop theory of mind. Some of you actually went to university so I won’t attempt to define theory of mind, but for any fellow attendees at the drama school of life, the simplest way to put it is that artificial intelligence was stuck at the developmental age of your four-year-old son, if your four-year-old son had memorised the contents of Wikipedia and Reddit, and stolen millions of pictures of nude ladies off the Internet that he reconfigures to create “new” nude ladies whenever the shut-ins on X ask him to. You might want to have that child looked at.
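For the avoidance of doubt about the “most likely next word” bit, here’s a deliberately dumb caricature in Python that counts which word follows which and always parrots the most frequent successor. Real LLMs are transformers over subword tokens, not bigram counters, but the training objective genuinely is next-token prediction:

```python
# A toy caricature of "predict the most likely next word": tally word
# bigrams from a tiny corpus, then always emit the most frequent
# successor. No knowledge, no judgement; just counting.
from collections import Counter, defaultdict

corpus = "shall we play a game shall we play chess shall we play a game".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def most_likely_next(word):
    """Return the most frequently observed word after `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("play"))  # "a" (seen twice, vs "chess" once)
```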
To go beyond symbolic learning (readable symbols, logic, and rules), boffins would have to teach these ocean-guzzling job thieves that the most predictable answer isn’t necessarily the correct answer; that the answer may be different depending on who asks the question; the answer may not be safe to give; there may be no correct answer… I could go on. I believe they’re getting there, using the coding equivalent of beating the motherboard with sticks each time it responds like an idiot who’s never been outside. But as will always be the case, an AI will never actually know anything; it can only ever perform an uncanny impression of knowing something. It has no knowledge, no judgement, no opinion, no understanding, no empathy; it’s just numbers generated from something it copied off a person then jiggled about with. Cool.
But back to our imminent white-hot deaths. Given the progress of such things, Joshua’s Gen Z grandson didn’t even bother to ask, “Shall we play a game?” The AI at NORAD is fully aware that, to quote Boy George, “War, war is stupid and people are stupid. And love means nothing in some strange quarters.” The question we might ask is, will it care? How would it care? It can’t care.
I’ll be an unapologetic bore about the dangers of normalising our use of unregulated generative AI and LLMs until everyone is shamed into abandoning them. And yes, I know it’s still the early days of a new technology and we’re supposed to imagine how useful these “tools” will become once perfected. But ultimately it doesn’t matter if they get better at not telling you the tallest bridges to throw yourself off (this is a real thing that happened), or if we all stop asking an ad-funded app, owned by a repulsive billionaire, for “help” with things human brains are far better and less annoying at doing (like turning off a fascist Clippy). Because, as my clever friend Simon told me, all that stuff is a sideshow. I mean, he didn’t say sideshow. I added that bit. But he did point out that the richest men on the planet (who are also, let’s never forget, the worst men on the planet) don’t need our money. They already have all the money. They don’t need our loyalty to their product because they can already buy whatever it is we choose to use, and ruin that too. Or not. They aren’t really interested in us and what we do or don’t like. The people who will ultimately decide what AI is used for are emotionally stunted, friendless morons who understand life only as a racist computer game they intend to win by being the last dork alive in a cum-encrusted bunker on the moon. That’s what this has always been about for them.
When the US Secretary of Defense Robert McNamara began to apply MAD theory to the (falsely) perceived accumulation of nuclear weapons by the Soviet Union in the 1960s, he was doing so from a universally shared belief that the instant extinction of all living things was bad and something even the most cartoonish of despots would like to avoid, despite their posturing. No one foresaw a time when everyone in charge of everything would be a bunch of guys in ugly trainers, convinced their dad will finally love them if they make the earth explode.
“How about… Global Thermonuclear War?”
“Wouldn’t you prefer a good game of chess?”
“Joshua, you are so cringe lol. Draw me some boobs and blow everything up.”
