Yuval Noah Harari's core claim is simple and devastating. Human civilization is built on shared fictions. Money, nations, corporations, human rights — none of these exist in nature. They exist because billions of people agree to act as if they do. Pull that thread, and everything unravels. And everything becomes explicable.
“Sapiens rule the world because only they can weave an intersubjective web of meaning.”
— Yuval Noah Harari, Sapiens: A Brief History of Humankind, 2011
Why He Belongs Here
Harari doesn't offer comfort. He offers clarity — and the two are rarely the same thing.
Harari argues that *Homo sapiens* dominates Earth not because of superior intelligence or tools, but because of one unique capacity: believing in things that don't physically exist. Myths, gods, money, nations — these shared fictions are the most powerful force in human history.
Every institution humans live inside — legal systems, currencies, corporations, human rights — is an intersubjective reality. It exists only because enough people simultaneously act as if it does. Harari didn't invent this insight, but he made it impossible for millions of readers to unsee.
The Agricultural Revolution looked like a triumph. Harari calls it history's biggest trap. Farmers worked harder, ate worse, and died younger than the foragers they replaced. Wheat expanded. Human welfare didn't. The lesson: civilizational scale and lived experience point in opposite directions.
Harari's most alarming forecast: algorithms will soon know humans better than humans know themselves. The ideology he calls Dataism treats information processing as the highest value. When that logic runs to completion, liberal humanism — the story that individual consciousness matters most — may become functionally obsolete.
Something changed in human cognition roughly 70,000 years ago. Harari dates the emergence of complex language and fictional thinking to this period. It wasn't just communication that evolved — it was the capacity to coordinate millions of strangers around beliefs no one can touch or see.
The same shared stories that built civilization are now being stress-tested by forces that don't believe in them. AI has no stake in democracy. Algorithms don't subscribe to human dignity. Harari's urgent question: what happens to fictions when the systems we built stop needing them?
Timeline
From Oxford doctoral candidate to the intellectual that a generation chose to explain itself to itself.
Harari completed his DPhil under Steven Gunn, studying Renaissance soldiers. The work was archival and specialist — the opposite of everything that followed. It established his discipline before he abandoned its scale.
Initially released in Israel, the book began as lecture notes from a world history course at Hebrew University of Jerusalem. The course forced Harari to ask questions his specialty never demanded. The answers filled 400 pages.
The international English edition triggered a cultural phenomenon. Barack Obama, Bill Gates, and Mark Zuckerberg all named it publicly. Within years it had sold tens of millions of copies. Academic historians sharpened their knives.
The follow-up looked forward rather than back. Harari argued that Dataism — the belief that information processing is the universe's highest value — could displace humanism entirely. The concept of "the useless class" entered public debate and stayed there.
A third book addressed the present directly: nuclear war, algorithmic manipulation, the collapse of liberal democracy's narrative, and the question of what to teach children for a future no one can predict. Critics noted that the book's problem statements outpaced its solutions.
Harari became one of the most prominent public voices calling for AI regulation. He signed open letters, addressed global forums, and published Nexus, arguing that AI represents a qualitatively new kind of threat — not because it will become conscious, but because it doesn't need to be.
Our Editorial Position
Harari operates at the intersection this platform was built for. He asks what humans actually are — not in a clinical sense, but in the oldest sense. What is the self? What is real? What stories hold a civilization together, and what happens when they stop working? These are not policy questions. They are metaphysical ones.
His critics are often right on the details. He generalizes. He smooths. He sometimes mistakes a compelling frame for a proven mechanism. But the questions he insists on asking — about collective consciousness, the fragility of shared meaning, and the futures we are building without permission — belong in any serious conversation about what it means to be human right now.
Esoteric.Love features thinkers who force a reckoning. Harari forces one about the stories that run our world. If those stories are fictions, as he argues, then the question of which fictions to choose — and which to let die — is the most important question alive.
The Questions That Remain
If all civilizations run on shared fictions, what makes one fiction worth defending over another? Harari describes the mechanism. He rarely adjudicates the value.
He argues that AI and biotechnology could produce a class of humans who are economically and cognitively obsolete. But obsolete by whose measure? The question of who sets the metric — and who benefits from that framing — doesn't always get the scrutiny it deserves.
The Cognitive Revolution gave us the capacity for myth. The Agricultural Revolution gave us hierarchy. The Digital Revolution is giving us something we don't yet have a name for. Harari's deepest implication may be the one he states least directly: we have never been in control of the stories that control us. What would it look like to change that?