
If you’ve ever wondered why humans dominate the planet (or why we might one day lose that dominance), Yuval Noah Harari’s Nexus offers a sobering answer: information systems have always shaped the way we cooperate, govern, and innovate.

From cave paintings to artificial intelligence, Harari traces how these systems evolved and what that means for our future. If you are interested in the insights I’ve collected from the book, let’s grab a cup of coffee and dive into it.


Quick summary for those in a hurry

Harari provides an overview of information systems from the Stone Age to artificial intelligence.

He explains how these systems have historically helped us collaborate in ever larger groups. Alongside the positive effects, there were also downsides: for example, while the invention of the printing press was the main driver of the scientific revolution, it also created a platform for spreading the misinformation that fueled witch hunts.

Harari also examines information systems in the context of different forms of government. They play a role in both totalitarian regimes (centralization of information) and democracies (information exchange).

AI, however, is different.

It no longer requires humans to process information. Harari explores the specific impacts of these developments - on democracies, superpowers, smaller governments, and totalitarian states.

Longer summary for those who want to read more

What is information?

Harari attempts to define “information” and notes that how to define it is itself contested: information means different things across disciplines (physics, philosophy, etc.). For the purposes of his book, he adopts a historical definition: information is any structured data or message that influences human behavior and social organization.

Information is not always truthful; it can be subjective, and it is highly dependent on context.

Stories: unlimited connections

The uniqueness of humans lies in our ability to collaborate flexibly and effectively in large groups (> 50 individuals).

One reason we can work effectively in such large groups is a form of communication: stories. Even though we can only truly know a few hundred people, stories about individuals (e.g., Jesus, Stalin), events, or products (Coca-Cola) have a much wider reach.

Through these “shared” stories, we feel connected to a large group (e.g., the Catholic Church, the state of China).

Stories can be true, but they can also simply serve a purpose. Even when they are not objectively true (or have been disproven), they help maintain order.

Truth ≠ Order.

Or in the words of George Orwell: Ignorance is strength.

Or at least it can be. This is the dark side of information systems, exploited especially by totalitarian regimes, which centralize (false) information to steer the public in a desired direction.

Documents: the bite of the paper tigers

Documents are the first evolution of stories. They are the invention that allowed humans to truly conquer the world.

Books (handwritten and later printed) always contain the same content, which led to a democratization of stories.

In the past, stories changed constantly through word of mouth as details were added or altered. Books changed that.

Books are always the same. So, the information is objective.

But is it really?

Looking at religious texts, it becomes clear that written information always leaves room for interpretation.

Harari discusses the canonization of the Bible and the Old Testament. Because much room for interpretation remained, rabbis created the Mishnah to clarify biblical rules. But even that left room for interpretation, leading to the Talmud.

For example, the Bible says: “Do not boil a young goat in its mother’s milk.” Depending on orthodoxy, this is interpreted differently - from the literal wording to strict dietary laws forbidding any mixing of meat and milk.

Similarly, the rule that no work may be done on the Sabbath is interpreted to include operating electrical circuits. Orthodox Jews do not use elevators on the Sabbath because they would have to press a button (though special Sabbath elevators that stop automatically at every floor exist to solve this).

Errors: the fantasy of infallibility

Information can also be wrong (and this is nothing new).

It can even lead to closed information spheres built entirely on falsehoods. When such spheres contain a lot of information, they appear credible simply because of their volume.

An example is the European witch hunts of the early modern period.

Conspiracy theorists often pride themselves on being critical thinkers by questioning the status quo. But when it comes to questioning their own theories, they quickly fall into confirmation bias.

Why?

They lack self-criticism or self-correction - a central element of the scientific method.

Decisions: a brief history of democracy and totalitarianism

Harari explores different forms of government. Totalitarian rulers centralize all information. Democracies, on the other hand, are structured differently: information is more decentralized. Decisions come from the majority but must never discriminate against a minority.

Harari explains that democracy is, however, a spectrum: from modern democracy on one end to autocracy on the other.

In ancient Greece, democracies were limited (only wealthy, free men). Later, monarchies emerged because a key element of democracy (direct communication with the people) disappeared as city-states grew into empires.

Communication required abstraction.

Totalitarian rule was also difficult without the technological capabilities of the 20th century. Autocrats like Nero could bend laws to their will, but they lacked an information network like Stalin’s to control every aspect of their subjects’ lives.

The new members: how computers are different from printing presses

In the past, technologies like the printing press or radio helped distribute information faster. But humans always created and curated the content.

That changes in the age of computers and algorithms.

Content is created, edited, and distributed by machines without constant human intervention. Will this lead to a “Silicon Curtain”?

Intelligence ≠ Consciousness.

This distinction between intelligence and consciousness is often overlooked.

Systems can be intelligent without being conscious. Algorithms (e.g., Facebook, Instagram) or today’s generative AI are undoubtedly intelligent (making decisions to achieve goals) but lack consciousness (awareness of themselves).

Relentless: the network is always on

In information networks, humans were always the only “intelligent” link in the flow.

Computer technology changed that. Algorithms make decisions, often at speeds humans cannot match. A good example is the stock market, which has been dominated by algorithmic trading for years.

Computers and algorithms have a clear advantage in processing knowledge. They thrive in a world of vast information (and bureaucracy), and they handle it better than humans: they have access to databases and the entire internet, and they process information faster.

This is the section of the book where Harari also introduces the concept of the nexus: the point of contact between a state and a taxpayer at which jurisdiction applies.

Why is this important in the digital world?

As algorithms become mightier, however, global tech giants increasingly dodge responsibility. They claim to provide only infrastructure while making billions without paying taxes in the countries where the information originates (and on which their business models depend).

Harari also highlights the dark side of modern communication technologies. With AI systems and omnipresent surveillance, totalitarian regimes of the kind autocrats like Ceaușescu could only dream of become possible.

An example is Iran’s use of facial recognition during the hijab crisis: thousands of women were convicted, and hundreds executed.

Fallible: the network is often wrong

Harari describes “Homo Sovieticus”: a type of person (or pattern of behavior) that shows no initiative. He emphasizes that the self-correction necessary for stable information networks was completely disabled in Homo Sovieticus.

Networks can also fail when there’s an alignment problem in goal optimization. Consider the famous paperclip factory thought experiment: A company buys an advanced AI to produce as many paperclips as possible. The AI realizes it needs more resources and factories, which humans won’t give up easily. So, it eliminates all humans to access resources. Then, it faces Earth’s limitations and builds autonomous spaceships to harvest resources from other planets. It stops only when the entire universe is filled with paperclips.

The AI optimized its goal, but the goal was poorly defined.

Even if reality doesn’t become that dystopian, defining an “ultimate” goal that isn’t counterproductive is extremely difficult.
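The alignment problem Harari describes can be made concrete with a toy sketch (my own illustration, not from the book): an optimizer pursues its literal objective and, unless the constraint is written into the goal itself, happily consumes resources we meant to protect. All names here (`make_paperclips`, the `world` dictionary) are invented for the example.

```python
def make_paperclips(resources: dict, protected: frozenset = frozenset()) -> int:
    """Greedy optimizer: convert every available resource unit into
    paperclips, skipping only resources explicitly marked as protected."""
    clips = 0
    for name, units in resources.items():
        if name in protected:
            continue  # the constraint the naive objective forgot
        clips += units
    return clips

world = {"iron_ore": 100, "farmland": 40, "cities": 10}

# Naive objective ("maximize paperclips"): everything gets converted.
print(make_paperclips(world))  # 150

# Better-defined objective: the constraint lives in the goal, not the optimizer.
print(make_paperclips(world, protected=frozenset({"farmland", "cities"})))  # 100
```

The point of the sketch is that the optimizer is identical in both calls; only the specification of the goal changes, which is exactly where Harari locates the difficulty.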

There are two philosophical schools that address this issue:

  • Deontology: Associated with Immanuel Kant, this school judges actions by universal moral rules; a popular shorthand is “Treat others as you would like to be treated.” Sounds good until you ask who “others” are. In Nazi Germany, Jews were excluded from being considered “human.” Would an AI similarly exclude humanity entirely?
  • Utilitarianism: Decisions are morally right if they produce the greatest happiness (or least pain). This seems robust compared to worldview-dependent deontology. But challenges arise in comparing different experiences of happiness or pain. For example: Should a train be diverted to kill an old man with cancer instead of three children? What if the man had only three painful months left? Is that positive or negative?

Democracies: can we still hold a conversation?

Liberal democracies, despite their flaws, are the best alternative we have at the moment, because they have strong self-correction mechanisms.

Fundamental principles:

  • Benevolence: The perception that someone acts for the benefit of another person and not opportunistically (e.g., a doctor uses a patient’s medical history for treatment but doesn’t share it with the employer for profit).
  • Decentralization
  • Mutuality: If surveillance exists, it must go both ways (government → people & people → government).

A pivotal moment in AI history was Move 37 by AlphaGo.

For a long time, people believed AI could never surpass humans in Go. AlphaGo proved them wrong.

What’s special: apart from the game’s rules, nothing was programmed. The AI learned from experience (replaying past games and playing against itself). Move 37 is remarkable because Go experts could not grasp its meaning at the time, yet it laid the basis for the AI’s victory. The AI had developed a strategy that humans had overlooked in more than a thousand years of playing Go.

Totalitarianism: all power to the algorithms

In totalitarian regimes, AI’s effects are more pronounced than in democracies. On one hand, AI enables unprecedented data processing - something autocrats like Stalin and Ceaușescu could only dream of. Today, it’s possible to monitor every citizen and detect patterns revealing habits, fears, and even thoughts: Are they secret political opponents? Are they truly loyal?

The world George Orwell described in 1984 could become reality in autocratic states with AI.

Early examples exist in countries like China, which introduced a “Social Credit” system. Pro-government actions add points, criticism and undesirable acts subtract points. Benefits vary by score (housing, schools, etc.). The score isn’t tracked only by humans but partly by automated, AI-driven systems.

But AI also poses a risk for totalitarian regimes. Since all information flows to one point (the dictator), the AI only needs to manipulate that person to control the entire state. Harari cites a Roman example: Tiberius became increasingly dependent on Praetorian Prefect Sejanus, who isolated him and controlled the information reaching the emperor, gaining immense power.

Whoever controls the flow of information in a totalitarian state becomes the most powerful person. Normally, that’s the dictator. But it could also be an AI.

The silicon curtain: global empire or global split?

Artificial intelligence could lead to a new division of the world: between those who own the most/best data and can develop strong AI, and those who lack these resources. The latter will become dependent on the former.

In the end, Harari paints a dystopian picture: AI has the potential to wipe out humanity (and possibly all biological life on Earth) if we lose control.

Epilogue

We are still at the beginning, but it’s crucial that decision-makers understand AI’s potential impact now - during its “canonization”. AI is not just another form of information technology (like writing or printing). Its inherent intelligence amplifies both its dangers and its potential benefits.

Final Thoughts

I genuinely enjoyed reading Harari’s book. Yes, it was longer than I expected and at times leaned heavily into historical detail, but it never lost its grip on me.

The narrative stayed engaging, and I walked away with insights that feel highly relevant in today’s digital age.

What struck me most is how Harari connects the evolution of information systems (stories → documents → algorithms → AI) with the way societies organize power. It’s not just history. It’s a lens for understanding the future.

If you’re curious about what the concept of Nexus means in the context of the digital economy, how I interpret Harari’s perspective on AI and governance, and what all this implies for entrepreneurs navigating a world of borderless data and algorithmic decision-making, stay tuned for next week’s post.