If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky
List: $24.99 | Sale: $17.50
Club: $12.49

If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All

Authors: Eliezer Yudkowsky, Nate Soares

Narrator: Rafe Beckley

Unabridged: 6 hr 18 min

Format: Digital Audiobook Download

Published: 09/16/2025


Synopsis

"May prove to be the most important book of our time.”—Tim Urban, Wait But Why

The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
 
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
 
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. 
 
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

“The best no-nonsense, simple explanation of the AI risk problem I’ve ever read.”—Yishan Wong, former CEO of Reddit

Reviews

Goodreads review by Josh on September 16, 2025

The title is not an exaggeration. This is a short and accessible explanation of the most important problem in the world: the threat of extinction from superintelligent AI. This book came out today. I started reading this morning and finished this afternoon. I was excited to read it, to say the least…

Goodreads review by Alvin on September 19, 2025

I’ve been working in the AI space for over three decades, and this was a difficult book to actually read all the way through. Not because the content was disturbing and scary, but rather because the arguments rely completely on fictional storytelling. There clearly are real risks to the current unfe…

Goodreads review by Bill Kuta on September 17, 2025

As soon as I saw that the nest had 91 pebbles… I knew it was wrong. Amazing book. Both the content and prose were on point. You won’t necessarily like what you’ll read (we’re probably doomed), but you’ll like reading it…

Goodreads review by Dave on June 28, 2025

This is an alarming book about the dangers posed by ASI that I hope finds a wide readership, but I fear I may not have been the right audience. The author does an excellent job breaking things down in a relatable way and doesn’t overwhelm with excessive nuance. I’d recommend it to anyone who isn’t conce…

Goodreads review by Nikola on September 18, 2025

Multiple AI companies have the explicit goal of building an artificial intelligence that vastly surpasses human capabilities in all domains. Many experts think that the companies will achieve that goal within a decade. This is a huge deal. This book is an accessible introduction to AI safety, which i…


Quotes

“The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.”—Max Tegmark, author of Life 3.0: Being Human in the Age of AI

“If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”—Tim Urban, cofounder, Wait But Why

“The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.”—Stephen Fry

“The best no-nonsense, simple explanation of the AI risk problem I’ve ever read.”—Yishan Wong, former CEO of Reddit

“Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.”—Emmett Shear, former interim CEO of OpenAI

“Everyone should read this book. There’s a 70% chance that you—yes, you reading this right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.”—Daniel Kokotajlo, AI Futures Project

"A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."—Scott Alexander, founder, Astral Codex Ten

“Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong.”—Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge

“This book offers brilliant insights into history’s most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares’s memorable storytelling about past disaster precedents (e.g., the inventor of two environmental nightmares: tetra-ethyl-lead gasoline and Freon) highlights why top thinkers so often don’t see the catastrophes they create.”—George Church, Founding Core Faculty & Lead, Synthetic Biology, Wyss Institute at Harvard University

“Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key—an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.”—R.P. Eddy, former director, White House, National Security Council