You are reading a State of Dystopia post. These entries deal with current events that put us on the cyberpunk dystopia timeline. Read them now to see the future we’re going towards. Or read them in the future to figure out where things went wrong.
I believe our current AI moment is akin to the dot-com era. I do not mean simply that it’s a bubble, because at this point everyone understands that. (Even the CEO of OpenAI said as much last summer.) Instead, this post is about what will come after the bubble bursts.
I specify the dot-com bubble because it actually lends some credence to the technology. The central premise of the dot-com era, the value of the internet and e-commerce, proved warranted in the end. The market just outpaced reality at the turn of the century.
Similarly: the frantic press given to AI, the corporate rush to magically fit AI into growth plans, and the enormous valuations given to a range of companies all appear rather absurd, especially when the profitability of resource-hungry large language models remains such an open question.
But that does not mean the technology lacks promise farther in the future. I have been skeptical of AI hype for years, so the dot-com comparison is, for me, an optimistic one.
Stanford’s Paul Saffo, who has argued that the current moment is a bubble, puts it well:
Folks always misunderstand bubbles. I am always having to remind people that just because something is a bubble does not mean that it is not also quite real, with profound potential impact.
— Saffo, interviewed by the San Francisco Examiner
So if this is a bubble, what will come next? Leaping ahead of me in a piece for Locus Magazine aptly titled “What Kind of Bubble is AI?”, author Cory Doctorow writes:
Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind. […] When the dotcom bubble burst, it left a lot behind.
[…]
The most important residue after the bubble popped was the millions of young people who’d been lured into dropping out of university in order to take dotcom jobs where they got all-expenses paid crash courses in HTML, Perl, and Python.
Doctorow contrasts the dot-com bubble with that of crypto or NFTs, which he says left little behind. (I agree.) He doesn’t firmly identify the type of bubble he thinks AI will fall into, instead exploring arguments for either scenario.
My general suspicion is that this bubble is not going to be as productive on the labor side, per Doctorow’s characterization, as the dot-com crash was.
I do not believe many workers are learning new skills for generative AI that can also be applied well elsewhere, or even reapplied in the future. How much long-term economic value is there in being really good at telling a machine what to create, when it’s assumed the machine will quickly evolve to require simpler instructions? Half the premise of the generative AI hype revolves around the lack of skill that is required.
If anything, the current generative AI bubble may result in a net reduction in the number of skills held by certain industry workers. The recent debut of OpenAI’s Sora, which translates text to video, stunned the internet. It’s not hard to imagine more refined, creative video production techniques being omitted from Hollywood internships and fellowships, with studios assuming that it’ll be cheaper to train new filmmakers and video editors to just generate the right shots. In this hypothetical, Sora would have a potential upside in allowing video editors to save time on mundane tasks — but it would also incentivize the industry to pursue increasingly bland content, and cut labor’s leverage.
There is some question, though, about how much current investment in computing infrastructure will pay off down the road. And not just the physical plants, but the class of engineers and designers trained to create the next-generation chips, supervise complex AI models, and so on. While the number of those technologists pales in comparison to the number of people merely learning to use AI professionally, that aspect of the bubble may yet bear fruit.
Forbes contributor Mike O’Sullivan, who does not believe we are in a ‘proper’ AI bubble yet, cautions against being too rosy-eyed about bubbles:
[…] Asset bubbles usually destroy wealth, invariably transfer it from poorer to richer investors (the rich buy early and the poorer investors buy late, as it goes), they distort investment across economies and when they collapse, their aftermath can be costly (witness Japan’s lost decade(s)). Bubbles do often leave behind useful infrastructure – railways in the late 19th century, and the internet/telecoms infrastructure of the 2000s, but at a high price.
But suppose we do end up with some useful infrastructure left behind. What would the negative aftermath of this current bubble look like, aside from the obvious part of day traders losing out?
The most relevant question to a typical person is whether any trend towards automating work will be reversed. For example, whether CNET, which apparently started using AI to produce articles around the time it laid off 10% of its staff, will end up rehiring for a significant portion of those jobs.
My own view is that AI — in its current and likely near-future iteration — does not automate jobs away so much as it makes them more precarious. Even CNET, ostensibly the clearest-cut case of automation so far, likely reflects a long struggle in online news that is unrelated to generative AI. And even if this assessment is correct, certain white-collar jobs would still be unlikely to claw back much ground, because the increasing prevalence of part-time and gig/contractor work showed no sign of abating even before the current generative AI hype cycle.
While this may be an over-simplification, I must admit that I think any plausible outcome will echo the longstanding story of our age: the consolidation of power in the hands of economic elites and large corporations.
After all, if your humble cyberpunk blogger can identify a bubble, it’d be silly to imagine that well-resourced investors can’t. Australian Financial Review columnist Adir Shiffman puts it like this:
The next hot thing can melt even the coldest funding winter.
The phenomenon is fuelled by a venture capital model that requires large risks and large losses to ensure exposure to the few disproportionately huge successes. They call it the “power law”, but high school maths students call it exponentially distributed values, where the top few winners deliver almost all the returns.
Power law FOMO means that each investor is ferociously fighting for access, to avoid missing the few biggest winners.
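Shiffman’s “power law” point is easy to sketch numerically. The toy simulation below (a hypothetical 100-company portfolio with a made-up Pareto tail parameter, not real fund data) shows how a heavy-tailed return distribution concentrates almost all of the gains in a handful of winners:

```python
import random

random.seed(42)

# Hypothetical VC portfolio: 100 exit multiples drawn from a Pareto
# (power-law) distribution. The shape parameter 1.16 is the classic
# "80/20" value -- an illustrative assumption, not industry data.
returns = sorted((random.paretovariate(1.16) for _ in range(100)), reverse=True)

total = sum(returns)
top5_share = sum(returns[:5]) / total
print(f"Top 5 of 100 companies capture {top5_share:.0%} of total returns")
```

Run with different seeds, the top handful of companies routinely accounts for the bulk of the portfolio’s value — which is exactly why, as Shiffman notes, investors fight so hard for access to the few prospective outliers.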
At the core of this cynical dystopian blogger is a disappointed utopian: someone nostalgic for the days when the left viewed automation as key to making the future better. It is almost tragic that so much discourse about AI is trapped by concern for its effect in the marketplace — is it a bubble, is it real, will jobs be lost, etc. Rather than being able to view the labor-saving potential of new technology as a good thing, we are forced to assess the tech as a threat — and somehow one that is necessary and inevitable.
And that is why, to some extent, debate about generative AI is irrelevant for most of us. Maybe OpenAI and Anthropic will be household names in 5 years, just as Amazon achieved new heights after the dot-com crash. Maybe the post-bubble landscape will see the emergence of the first true AI conglomerate, akin to the emergence of Facebook or Google. Or maybe the dot-com comparison gives too much credence, and generative AI is already near its functional peak (in economic terms).
But in all scenarios, Big Tech remains powerful and consolidated, and big investors retain their ownership of our economy. An AI crash wouldn’t stop that; even a larger tech crash probably wouldn’t stop that. What could actually stop it? Antitrust enforcement, financial regulation, and higher union density.
To return to the issue of the infrastructure that bubbles create, Big Tech’s investments in AI so far essentially amount to investing in their own services. Not even yet-to-be-realized infrastructure, but already-existing infrastructure. As fellow internet pundit Ed Zitron writes:
[Last year] Semafor reported that Microsoft’s $10 billion investment was largely made up of credits for their Azure cloud computing platform. In essence, Microsoft “invested” $10 billion in money that OpenAI had to spend on Microsoft’s services […] When Google invested $2 billion in OpenAI competitor Anthropic, it did so in tranches […] Coincidentally, this funding round took place only a few months after Anthropic signed a multi-year deal with Google Cloud worth $3 billion, locking them into Google’s compute platform in the process. Amazon also invested $4 billion in Anthropic, who agreed to a “long-term commitment” to provide Amazon Web Services (Amazon’s competitor to Microsoft Azure and Google Cloud) with early access to their models — and Anthropic access to Amazon’s AI-focused chips.
Frankly, high-tech is not new anymore. Sounds obvious, but it means that even if AI has its moment in the sun, it’ll still be one pocket in a larger ecosystem that already has entrenched players. This is likely the biggest difference between now and the dot-com bubble, because the big tech companies then weren’t nearly as powerful as they are today.
I won’t bemoan our collective efforts to understand new technology, but one does wonder if it is a flashy distraction from the boring matter of our inequitable political economy. The rich will get richer whether this AI moment turns out to be a total dud, partial dud, or completely worth the hype. And valid as it may be to fret about the economic impact of AI, most of that fear truly concerns not the technology itself, but who owns it.
Make a one-time donation, if you’d like. (I’ll still put out posts even if you don’t.)