"In response..." 📖 Empire of AI by Karen Hao

The queen of kitties, Luna, looks on with her glowing green and yellow eyes as I read through Karen Hao's Empire of AI. I'm in the Gods and Demons chapter.

A week ago I stumbled into yet another article to throw onto the Pile of Horrors surrounding the current generative AI hype cycle. Hardly surprising, I'm sure, especially if you talk to me regularly…but this one struck me as particularly damning because it gives us insight into how the club of elites — which, for better or worse, is currently "governing" society's adoption of these Large Language Models (LLMs) — is thinking about our society's trajectory. Spoiler: the working class is not exactly heavily featured in their vision of the future. From the article featuring Karen Hao, the author of Empire of AI:

“What’s interesting is many of them choose not to have children because they don’t think the world is going to be around much longer,” Hao says. “With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution.”

“It’s like a very intense version of utilitarianism,” she adds. “You’d maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth.”

You’d be forgiven for thinking that this is a plot akin to a Final Fantasy game, which regularly features supervillains who ultimately reach the conclusion of “I need to become God” (if they don’t happen to be God already). This desire proceeds to drive the plot forward as they typically attempt to wipe out entire races and usher in some sort of new world order.

So…yeah, I guess that’s what we’re talking about here. Seeing this quote was enough for me to modify our Sunday travels to include a trip to the bookstore, where I picked up a copy of Hao’s book.

Up to this point, my thoughts on the current state of AI and LLMs have largely come from a place of trying to understand my colleagues’ mindsets, asking questions like:

  • Why would you advocate for LLM-centric solutions that lead directly to your own job loss?
  • Why would you choose a tool that is 1/1,000th as powerful as your own brain and consumes millions of times more energy (not an exaggeration)?
  • Given the choice, why would you choose to communicate with a chatbot vs. another human?

These are all questions that I still care deeply about, but reading Empire of AI has opened an entirely new front in my mind that I wasn’t previously considering. Hao goes to great lengths to draw connections from the empires of history (admittedly never my best subject) to the big tech companies that have outsized influence on every aspect of our daily lives in 2025, ultimately concluding that these companies are just a modern form of such an empire. I wholeheartedly agree. From the book:

During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires’ enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify — and even entice the conquered into accepting — the invasion of sovereignty, the theft, and the subjugation. They justified their quest for power by the need to compete with other empires…

The more I’ve thought about companies like OpenAI, Microsoft, and Google this week through the imperial lens that this book provides, the more I see the parallels.

You might wonder how the stewards of these companies justify such behavior to themselves: it all seems to stem from the concept of “effective altruism”, which Hao also does an excellent job of covering. I’ll simplify it down to a three-step thought process here:

  1. Instead of being a charity worker all my life, I can do more net good for the world by becoming rich and funding 1,000 charity workers in a few decades. (The statistical concept here is expected value, or E[X].)
  2. If AI grows too powerful in the wrong hands (reaching Artificial General Intelligence or AGI, roughly on par with a human brain), it could end the world…and this outcome has the lowest E[X] of all outcomes (because presumably we’re all dead in this world).
  3. Therefore, it is morally imperative that I be the one to create AGI first, at all costs. (Costs no longer matter, because anything is better than someone else achieving AGI first, especially a country like ~gasp!~ China…)
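The expected-value arithmetic in step 1 can be sketched with a toy calculation. Every number below (probabilities, "impact units", the 1,000-workers figure) is hypothetical, chosen only to show why this math is so seductive:

```python
def expected_value(outcomes):
    """E[X] = sum of (probability * value) over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

# Path A: work at a charity yourself for 40 years.
# Near-certain, modest impact. (1 unit = one charity-worker-year.)
path_a = expected_value([(0.95, 40)])

# Path B: try to get rich, then fund 1,000 workers for a decade,
# but only if the get-rich plan succeeds (say, 5% of the time).
path_b = expected_value([(0.05, 1000 * 10), (0.95, 0)])

print(f"Path A (direct work): E[X] = {path_a:.0f} impact units")
print(f"Path B (earn to give): E[X] = {path_b:.0f} impact units")
# A tiny success probability multiplied by a huge payoff dominates the
# near-certain modest outcome, which is exactly why the logic is so easy
# to abuse once you get to pick the numbers yourself.
```

Notice that the conclusion flips entirely depending on the probabilities and payoffs you assume, and nobody audits those assumptions.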

Of course, one flaw is apparent right away in this line of thinking…what exactly makes OpenAI any more special, or any more deserving, to be the one to achieve AGI before the rest of us? Why couldn't it be another organization, or yes, even China or another country?

In response, they might argue that it's best for OpenAI to achieve this because they're a nonprofit with a clear mission of benefiting humanity…except wait, they're desperately trying to convert to a for-profit this year. Whoops!

Anthropic ($170bn) and OpenAI ($300bn+) are now in impossible situations, burning billions of dollars with no end in sight on toxic infrastructure-dependent business models. They cannot IPO. They are too big to be acquired. How does this end, exactly? www.wheresyoured.at/ai-is-a-money-trap/


— Ed Zitron (@edzitron.com) August 6, 2025 at 4:35 PM

In short, Sam Altman may have had good intentions at the start, but the allure of money and empire corrupted him as it so often does, and now he's beginning to flounder as investor pressure mounts, saying things like (paraphrasing) "well, AGI was never that important in the first place, haha!" Tell that to the workers in Venezuela who helped train GPT-3 for you, or the employees you first showered in equity and then attempted to claw it back from when they levied fair criticism against you.

📝

Side note: we really cannot be afraid to criticize. It's one of the most important powers we have, and I've heard directly from friends in recent weeks who were afraid to do such a thing at their company. Nothing makes those in positions of power inherently more human, or less imperfect, than the rest of us, whether in a corporation or in government.

I would love for Empire of AI to become required reading for all ChatGPT users, but for now as a substitute I’ll do my best to promulgate these ideas whenever I can. I strongly recommend it, but if you want more of a taste from the book before picking it up, check out this excerpt from an early chapter.

The book ends on a positive note that I loved so much, I will echo it. For a while now I’ve been advocating for the use of smaller machine learning models; in my mind I’ve never been able to come to terms with how the mega-LLMs require an entire internet’s worth of data to write a high schooler’s short story or research paper, or produce sloppy AI art. (These are wasteful solutions in search of a non-existent problem; in fact they’re just creating more problems by taking away our critical thinking capabilities.) Why can’t we continue designing thoughtful models that solve one ACTUAL societal problem really well?

Case in point: Te Hiku Media. Two Nvidia GPUs, 92% accuracy transcribing te reo Māori, an endangered language. The community curated the dataset, so there's verifiably no abusive or harmful content in the training data, and no need to subject content moderators to psychologically damaging filtering of the outputs.

Big tech execs may think they have a divine “right to win”, but they do not. We, collectively as a society, must decide who we’ll lift up in response to their unreasonable actions and demands.