"In response..." đ Empire of AI by Karen Hao
A week ago I stumbled into yet another article to throw onto the Pile of Horrors surrounding the current generative AI hype cycle. Hardly surprising I'm sure, especially if you talk to me regularly…but this one struck me as particularly damning because it gives us insight into how the club of elites – which for better or worse is currently "governing" society's adoption of these Large Language Models (LLMs) – is thinking about our society's trajectory. Spoiler: the working class is not exactly heavily featured in their vision of the future. From the article featuring Karen Hao, the author of Empire of AI:
"What's interesting is many of them choose not to have children because they don't think the world is going to be around much longer," Hao says. "With some people in more extreme parts of the community, their idea of Utopia is all humans eventually going away and being superseded by this superior intelligence. They see this as a natural force of evolution."
"It's like a very intense version of utilitarianism," she adds. "You'd maximise morality in the world if you created superior intelligences that are more moral than us, and then they inherited our Earth."
You'd be forgiven for thinking that this is a plot akin to a Final Fantasy game, which regularly features supervillains who ultimately reach the conclusion of "I need to become God" (if they don't happen to be God already). This desire proceeds to drive the plot forward as they typically attempt to wipe out entire races and usher in some sort of new world order.
So…yeah, I guess that's what we're talking about here. Seeing this quote was enough for me to modify our Sunday travels to include a trip to the bookstore, where I picked up a copy of Hao's book.
Up to this point, my thoughts on the current state of AI and LLMs have largely come from a place of trying to understand my colleagues' mindsets, asking questions like:
- Why would you advocate for LLM-centric solutions that lead directly to your own job loss?
- Why would you choose a tool that is 1/1,000th as powerful as your own brain, and that consumes millions of times more energy (not an exaggeration)?
- Given the choice, why would you choose to communicate with a chatbot vs. another human?
These are all questions that I still care deeply about, but reading Empire of AI has opened an entirely new front in my mind that I wasn't previously considering. Hao goes to great lengths to draw connections from the empires of history (admittedly never my best subject) to the big tech companies that have outsized influence on every aspect of our daily lives in 2025, ultimately concluding that these companies are just a modern form of such an empire. I wholeheartedly agree. From the book:
During the long era of European colonialism, empires seized and extracted resources that were not their own and exploited the labor of the people they subjugated to mine, cultivate, and refine those resources for the empires' enrichment. They projected racist, dehumanizing ideas of their own superiority and modernity to justify – and even entice the conquered into accepting – the invasion of sovereignty, the theft, and the subjugation. They justified their quest for power by the need to compete with other empires…
The more I've thought about companies like OpenAI, Microsoft, and Google this week through the imperial lens that this book provides, the more I see the parallels.
- "Seized and extracted resources that were not their own"…as in, the hijacking of our power grid and potable (yes, they prefer potable!) water supply
- "Exploited the labor of the people"…as in, the workers in Kenya who were contracted at under $2/hour to improve ChatGPT's filters, so those of us who might benefit from such a tool would (mostly, but not always!) be spared from the psychological damage of seeing responses including "child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest"
- "Projected racist, dehumanizing ideas of their own superiority"…as in, allowing culture wars to perpetuate by insufficiently addressing model biases (yes, even the shiny new GPT-5 is still plenty capable of using slurs…and let's not even talk about Grok right now)
You might wonder how the stewards of these companies justify such behavior to themselves: it all seems to stem from the concept of "effective altruism", which Hao also does an excellent job of covering. I'll simplify it down to a three-step thought process here:
- Instead of being a charity worker all my life, I can do more net good for the world by becoming rich and funding 1,000 charity workers in a few decades. (The statistical concept here is expected value, or E[X]; see the sketch just after this list.)
- If AI grows too powerful in the wrong hands (reaching Artificial General Intelligence, or AGI, roughly on par with a human brain), it could end the world…and this outcome has the lowest E[X] of all outcomes (because presumably we're all dead in this world).
- Therefore, it is morally imperative that I be the one to create AGI first, at all costs (costs no longer matter, because anything is better than someone else achieving AGI first, especially if that someone is a country like ~gasp!~ China…)
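To make the expected-value arithmetic in that first step concrete, here's a toy sketch in Python. Every probability and payoff below is a number I invented purely for illustration; none of it comes from Hao's book.

```python
# A toy illustration of the expected-value reasoning above.
# All probabilities and payoffs are made up for demonstration only.

def expected_value(outcomes):
    """outcomes: a list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * x for p, x in outcomes)

# Step 1: "earn to give." A 10% shot at funding 1,000 charity workers beats
# being one charity worker with certainty, if E[X] is all you compare.
one_worker   = expected_value([(1.0, 1)])                    # E[X] = 1
earn_to_give = expected_value([(0.9, 0), (0.1, 1_000)])      # E[X] = 100

# Step 2: assign a catastrophic payoff to "AGI in the wrong hands," and even a
# tiny probability of it dominates the entire calculation...
doom = expected_value([(0.999, 100), (0.001, -10_000_000)])  # E[X] is about -9,900

# ...which is how you talk yourself into Step 3: any cost looks justified
# if it means you get there first.
print(one_worker, earn_to_give, doom)
```

Running it prints 1.0, 100.0, and roughly -9900: a certain modest good, a larger expected good, and a catastrophe that dwarfs both. Notice that whoever gets to pick the probabilities and payoffs controls the conclusion.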
Of course, one flaw is apparent right away in this line of thinking…what exactly makes OpenAI any more special, or any more deserving of being the ones to achieve AGI before the rest of us? Why can't it be another organization, or yes, even China or another country?
In response, they might argue that it's best for OpenAI to achieve this because they're a nonprofit with a clear mission of benefitting humanity…except wait, they're desperately trying to convert to a for-profit this year. Whoops!
Anthropic ($170bn) and OpenAI ($300bn+) are now in impossible situations, burning billions of dollars with no end in sight on toxic infrastructure-dependent business models. They cannot IPO. They are too big to be acquired. How does this end, exactly? www.wheresyoured.at/ai-is-a-money-trap/
– Ed Zitron (@edzitron.com) August 6, 2025 at 4:35 PM
In short, Sam Altman may have had good intentions at the start, but the allure of money and empire corrupted him as it has so many others, and now, as investor pressure mounts, he's beginning to flounder, saying things like (paraphrasing) "well, AGI was never that important in the first place, haha!" Tell that to the workers in Venezuela who helped train GPT-3 for you, or to the employees you first showered in equity and then attempted to claw that equity back from when they levied fair criticism against you.
Side note: we really cannot be afraid to criticize. It's one of the most important powers we have, and I've heard directly from friends in recent weeks who were afraid to do such a thing at their company. There is nothing that makes those in positions of power inherently more human or less imperfect than we are, whether it's a corporation or a government.
I would love for Empire of AI to become required reading for all ChatGPT users, but for now as a substitute I'll do my best to promulgate these ideas whenever I can. I strongly recommend it, but if you want more of a taste from the book before picking it up, check out this excerpt from an early chapter.
The book ends on a positive note that I loved so much, I will echo it. For a while now I've been advocating for the use of smaller machine learning models; in my mind I've never been able to come to terms with how the mega-LLMs require an entire internet's worth of data to write a high schooler's short story or research paper, or produce sloppy AI art. (These are wasteful solutions in search of a non-existent problem; in fact they're just creating more problems by taking away our critical thinking capabilities.) Why can't we continue designing thoughtful models that solve one ACTUAL societal problem really well?
Case in point: Te Hiku Media. Two Nvidia GPUs, 92% accuracy in transcribing the endangered te reo Māori language. The community curated the dataset, so there's verifiably no abusive or harmful content in the training data and no need to subject content moderators to psychologically damaging filtering of the outputs.
Big tech execs may think they have a divine "right to win", but they do not. We, collectively as a society, must decide who we'll lift up in response to their unreasonable actions and demands.