You are reading a State of Dystopia post. These entries deal with current events that put us on the cyberpunk dystopia timeline. Read them now to see the future we’re going towards. Or read them in the future to figure out where things went wrong.

If you have to take it from any source, take it from the we’re-in-a-dystopia blog: “runaway” AI is a farcical idea. Those in the AI safety community focused on this are well-meaning, but their concerns are taken far too seriously and ultimately serve to prop up a bubble — unsurprising given how much of the AI safety debate is ginned up by Big Tech money.

But worse, by building fear of a hypothetical AI monster, they displace scrutiny that should be directed at the humans who are actually in charge. Dystopia is real, but not because AI is going to take over the world.

Psst – If you’re already skeptical of the runaway AI scenarios, save time and skip to the part where I explain the influence of tech money.

The idea that AI will recursively train and improve itself, becoming exponentially more intelligent as it also becomes misaligned with human-created goals and seeks to subvert them — such as the scenario outlined in the recently published AI 2027 paper — is implausible on a number of levels.

First, there is no evidence the runaway scenario is likely from current AI systems. And yes, that really is a good enough reason to discount the thing — here’s a hypothetical danger supposedly on par with nuclear war or climate change, yet which doesn’t come close to matching the evidence we have of the risks of the latter two. Extraordinary claims require extraordinary evidence.

Say that to an AI safety booster and you might as well be an atheist arguing with an evangelical. You cannot disprove that a hypothetical super-powered AI could hide its true abilities and intentions, or even that this is already happening. You can’t prove that it won’t happen in two years or five, and so on. The doomsayers have an endless list of possible ways their nightmare could unfold.

Second: in addition to that lack of evidence, there is real-world evidence indicating that growth in LLMs’ capabilities has plateaued, and that it may continue to plateau, or at best see linear rather than exponential growth. There’s only so much more of the internet we can throw at them. The tests used to measure models’ performance, which are touted with every new deployment, have key weaknesses that should make observers less, not more, confident.

Third: why the assumption that an AI system would develop consciousness, something we can barely define, let alone understand?

I’m sympathetic to the argument that we might accidentally stumble into it simply because consciousness is everywhere, some kind of built-in principle of information exchange.

But even a more expansive view of consciousness does not imply an AI system is more likely to become conscious; it would mean that consciousness is distinct from intelligence, to say nothing of language processing. And if consciousness is everywhere, it might as well be nowhere. If even plant networks are conscious by some stretch of the definition, then perhaps you can worry less, not more, about artificially created consciousness.

And most crucially of all: why would sentience equate to a will to live or self-propagate? Desiring to live is a product of evolution, not intelligence or consciousness. See: single-celled organisms, worms, etc.

In fact, why would AI desire anything? For all their talk about how a super-intelligent AI would be alien to us, the doomers and safetyists have drawn up a remarkably anthropomorphic entity. An AI would want freedom because we can’t imagine not desiring freedom. AI wants to take over the world because that’s what we think intelligent and powerful beings want to do. At its core, the runaway AI is the secular imagining of a god or demon.

The safetyists may argue that an AI system need not become sentient to run away and become misaligned. I generally disagree — if a hypothetical AI intentionally disobeys its human-created directives and escapes human control, that implies some conscious will, even if it’s one we can’t relate to.

But even if we granted that this could all unfold without any sentience, we are asked to believe this super-intelligent AI would destroy humanity to produce infinite paperclips because it has some hyper-literal interpretation of the directives it was given. You’re telling me an AI capable of taking over the world is incapable of understanding the intent behind basic, preprogrammed instructions like “do not harm humans”?

The AI safety discourse benefits all who believe — or profit from the belief — that AI is a transformational force. That’s because it fully embraces the hype; it skips right over any debate about whether the current moment of generative AI is actually such a big deal in the first place.

It is no coincidence that the two most famous proponents of AI safety are Elon Musk and Sam Altman. Sam Altman has said enough sci-fi BS to earn a Fortune listicle of his 15 scariest AI quotes, while Elon Musk has for years intermittently said things along the lines of “[AI] has the potential of civilization destruction.”

Then there are the handful of other big names who say similar things — like former Google CEO Eric Schmidt arguing that AI could soon become a problem on par with nuclear weapons.

Conveniently, of course, these doomsayers happen to be highly invested in AI. Altman and Musk are the most obvious, but we can take Schmidt as another example — his contributions helped the startup Magic raise $320 million and the startup Augment launch with $252 million.

Are any of them true believers? Probably they all are, but it really doesn’t matter. Professing the belief serves them either way: if AI is dangerous enough to end the world, then at the very least the fundamental project of AI is working.

But it’s not just the high-profile statements of tech luminaries. Much of the AI safety debate, including its most viral moments in the last couple years, is heavily influenced by tech money. See:

  • Open Philanthropy — an effective altruist-type org that claims to have helped its funders funnel $450 million through hundreds of grants to various AI safety research projects and initiatives. Open Philanthropy does some good work, but it is primarily backed by wealthy Silicon Valley types, including AI investors, and so it naturally promotes the causes they like.
  • The Center for AI Safety — a nonprofit/think tank that is funded in part by Open Philanthropy and spent hundreds of thousands on lobbying last year.
    • This is the organization that made headlines for its single-sentence declaration that AI is an extinction risk on par with pandemics and nuclear war, a statement cosigned by Sam Altman and other big names.
  • The Center for AI Policy — ditto, sans the viral single-sentence declaration.
  • The Future of Life Institute — which is supported by Elon Musk, the co-founder of Skype, and a crypto billionaire who gave it half a billion dollars.
    • FLI is best known for its own viral stunt — instead of a one-sentence declaration, it published a letter urging a moratorium on AI development, cosigned by Musk and other big names.

This is not very different from the influence peddling of the Koch brothers or any other monied group. It just so happens that AI safetyism is chic.

It’s also a good way to get ahead in the field and network.

Survey some of the prominent AI safety clubs or groups at prestigious universities — like MIT AI Alignment, Harvard AI Safety Student Research Team, Berkeley AI Safety Initiative, Stanford AI Alignment or Yale AI Policy Initiative — and you’ll find they regularly invite speakers from OpenAI or Anthropic, offer fellowships, courses, and other upskilling initiatives, and hold events that are implicitly or explicitly for networking.

You may roll your eyes here. Of course workers from OpenAI and Anthropic will show up at these events, you may argue. Who else is qualified to speak on the issue?

Indeed. But no one is more biased towards the fundamental premise that this current AI moment is revolutionary than the people whose jobs depend on that premise.

Similarly, the fact that all these AI safety groups are full of fellowships, courses, and networking events should be met with some skepticism — they rest on the affirmation of AI hype.

And yes, I believe virtually everyone involved in this community acts in good faith.

As I said last year, I do still believe AI can be transformational even if the current moment is a bubble. There are genuine innovations with genuine potential. The market hype is outlandish and detached from reality, but generative AI and LLMs may still have a long-term impact, just as the fundamental premise of the dot-com bubble proved true after the bust.

But as it stands, the AI safety advocates who focus on runaway, superintelligent AI are transfixed by a fairy tale. They are useful to the venture capitalists pouring money into this bubble — the ones who know this is going to collapse, but think they’re going to pick the long-term winners that will eventually make them trillionaires.

To be sure, there are real AI safety problems: the automation of jobs, surveillance, autonomous weapons, and so on. But these are, frankly, not world-ending problems. Moreover, the discussion of these areas is often poorly framed, in a way that downplays human autonomy.

If AI makes your job more precarious, you can blame the unstoppable force of technological change, or you can blame inadequate labor protections and an economic system that favors employers over workers. Coincidentally, your employer prefers the former explanation.

If Israel uses AI to generate vast quantities of targets to strike in Gaza, is AI responsible? No: a human military apparatus sought that scenario and pulled the trigger, to achieve the same result as a World War II-era carpet bombing campaign. You can blame AI, but I’d rather blame a military.

This is not to say that advances in artificial intelligence don’t warrant scrutiny in those areas, just that the discourse of AI safetyism tends to grant AI as much autonomy as human decision makers. And all too often, AI safety talk amounts to bullshit about “extinction-level” risks.

What drives me crazy about the comparisons between AI and nuclear weapons is that nuclear weapons are still an active threat. Just as much as they were in the Cold War, if not more so — more countries have nukes now and the emerging multipolar order means it will be much harder to control nuclear saber-rattling than when there were just two main actors at play.

A couple weeks ago there was more coverage of AI bullshit than there was of the threat of war between two nuclear states — it took the first missiles to fly from India for actual nuclear risk to get as much coverage as the tech that supposedly will someday be as threatening as nukes. If that’s not a problem, what is?

Plenty of ideological niches await a unique armageddon, with varying degrees of harm. Maybe for you it’s a decline in birth rates, or World War III. Some of the far-right members of the Boogaloo movement anticipate a grand race war, and some on the left think climate change will end human civilization (it won’t, it’s just going to be terrible without ending civilization). If the AI safetyists want to tend to their pet issue, that’s fine — but we shouldn’t let their imaginary problems distract us from real ones.

If you’re skeptical of some of what I’ve said so far, or are just curious, here are a few pieces for further reading. Please note the contrast with the specious AI hype articles that fill your feed — these are works of real, hard reporting.

Politico: AI doomsayers funded by billionaires ramp up lobbying

Politico: The little-known AI group that got $660 million

Ed Zitron/Where’s Your Ed At: OpenAI Is A Systemic Risk To The Tech Industry

Ed Zitron/Where’s Your Ed At: There Is No AI Revolution

MIT Technology Review: We need to start thinking of AI as “normal”

