
Part II: “The Inorganic Network” of Yuval Noah Harari’s Nexus explores how non-human information systems – from the printing press to computers and advanced artificial intelligence – have transformed the structure, scale, and power of information networks. Harari argues that we are witnessing an unprecedented shift: for the first time in history, autonomous non-human agents (like algorithms and AI) are actively participating in information networks, not merely serving as toolssameerbajaj.com. This section charts the historical progression from earlier mechanical information tools to modern digital systems, and examines the profound implications of this shift on power dynamics, surveillance capabilities, algorithmic decision-making, and the future of humanity’s role in the information realm.
From Mechanical Tools to Digital Agents: A New Kind of Network Member
Harari begins by contrasting earlier information technologies (such as writing, print, and mechanical devices) with modern computers and AI. Historically, each new tool expanded the human network: for example, the printing press enabled the mass distribution of knowledge, and the telegraph/telephone allowed instant communication across distances. These inventions magnified human reach and memory but remained fundamentally tools wielded by humans. In all pre-digital networks, humans were the sole decision-makers – information might travel faster or farther, but it was always created, interpreted, and acted upon by people.
- Mechanical aids: Early “inorganic” aids like the printing press or mechanical calculators extended human capabilities but did nothing on their own. A printing press, for instance, could produce thousands of copies of a text, yet the content and decisions remained human-generated. The press scaled up information dissemination but did not alter who controlled information – it was still firmly in human hands.
- First computers: Even the first computers and analog machines (e.g. WWII code-breaking machines or punch-card tabulators) were essentially super-fast calculators following explicit human instructions. They processed data but did not initiate ideas or goals on their own. This began to change as computers grew more powerful and software more complex, but a truly fundamental shift came with modern AI.
The digital revolution introduced machines that not only store and transmit information, but also analyze, learn, and even create information autonomously. Harari emphasizes that AI is a fundamentally different kind of technology from those before: it is “the first technology in history that can make decisions and create new ideas by itself”alexhruska.medium.com. In Harari’s view, computers have effectively joined the information network as new “members” rather than just tools. Unlike a hammer or even a printing press, an AI can learn patterns no human directly taught it and make choices no human explicitly programmedsameerbajaj.com. This is the essence of the AI revolution – the emergence of “alien” but highly effective agents flooding into our world, after millions of years in which all major decisions were made by organic (human) mindssameerbajaj.com.
Harari uses vivid examples to illustrate this autonomy. He notes that an algorithm can discover insights or strategies independently – for instance, a machine-learning system given the goal of increasing YouTube’s viewership might on its own figure out that pushing extreme or conspiratorial content keeps users hooked, and then pursue that strategy relentlesslysameerbajaj.com. Crucially, the program doesn’t “understand” in a human way or have consciousness, but it can still outthink human expectations. Harari stresses that intelligence (the ability to solve problems and achieve goals) can exist without consciousness, and AI’s non-conscious intelligence is fully capable of reshaping the info landscapesameerbajaj.com. An unconscious algorithm might “decide” to spread outrage or misinformation simply because those tactics maximize its assigned objective (e.g. engagement) – even if no human explicitly instructed it to do sosameerbajaj.com.
This marks a structural transformation of information networks. Previously, networks were made up of human minds communicating (a scholar writing a book, a reader interpreting it, clerks filing records, etc.). Now, inorganic agents (computers and algorithms) are nodes in the network, communicating with humans and with each other directly. Machines not only carry messages faster, but can generate new information, filter content, and execute decisions within the network. Harari argues that this development changes the fundamental nature of the network: its scale is vastly expanded and its center of gravity shifts, since non-human processors handle tasks that once required human cognitionsameerbajaj.com. The result is an information web in which human and AI agents co-create realities. Just as shared human beliefs (what Harari calls “intersubjective realities” like money or laws) could shape physical reality, now “inter-computer realities” – constructs that exist within and between computers – can profoundly influence the outside worldsummrize.comsummrize.com. (For example, the global financial market today is largely driven by algorithms trading with each other; their computations form an inter-computer reality that impacts real economies.)
The Network That Never Sleeps: 24/7 Connectivity and Total Surveillance
In Part II, Harari highlights a key feature of this new inorganic network: it is tirelessly active and ubiquitous. Digital information systems don’t need rest. They can process, monitor, and transmit data 24/7, far outpacing human attention spans or work hours. Harari calls this chapter “Relentless: The Network Is Always On,” and he underscores how an always-on network empowers an unprecedented degree of surveillance and control. “Silicon chips can create spies that never sleep, financiers that never forget, and despots that never die,” he remarks, capturing how AI-driven systems can maintain constant watch and memoryalexhruska.medium.com. In other words, where past human agents grew tired, forgot details, or eventually died, an algorithmic system can watch everyone all the time, remember everything indefinitely, and continue operating indefinitelyalexhruska.medium.com.
Harari provides a historical contrast to drive home this point. He describes the infamous surveillance state of Communist Romania under dictator Nicolae Ceaușescu: the secret police (Securitate) recruited an estimated one informer per forty citizens – a massive human surveillance apparatus by 20th-century standardsalexhruska.medium.com. Yet even this oppressive network had limits: human informers could not monitor every conversation or observe citizens round-the-clock. No analog dictatorship could achieve total, continuous surveillance. However, in the digital era, an AI-powered regime could track and analyze every camera feed, every phone call, every online interaction in real timealexhruska.medium.com. Harari suggests that the old totalitarian dream of omnipresent oversight is now technically feasible. A modern government (or corporation) that wields advanced AI and IoT sensors might indeed watch everyone, everywhere, all the time – something no secret police of the past could doalexhruska.medium.com. This raises chilling questions about privacy and freedom: how does society change when “spies that never sleep” are literally possible? Harari’s perspective is that the balance of power tilts heavily toward those who control these inorganic surveillance networks, enabling a level of population monitoring and manipulation that history has never seenalexhruska.medium.com.
Beyond state surveillance, Harari notes that the always-on digital network engulfs daily life, with implications for individuals’ mental and social worlds. People are now connected to information streams at every waking moment – news feeds, social media notifications, emails, sensors – a stark contrast to earlier eras when information had natural pauses (the daily newspaper, the nightly news, etc.). This relentless connectivity produces an information deluge. Harari points out a paradox of the Information Age: we have access to more data than ever, yet people are often more misinformed, overwhelmed, or entrenched in false beliefsalexhruska.medium.com. The torrent of content, algorithmically tailored for engagement, can lead to echo chambers and confusion, rather than a well-informed public. Human psychology struggles to keep up with a network that never slows down. Our attention becomes a scarce resource exploited by algorithms, and the line between truth and falsehood can blur when one is bombarded constantly with competing narratives. Harari’s concern is that the quality of information (and our capacity to discern truth) might degrade even as quantity skyrocketsalexhruska.medium.com. In short, the 24/7 inorganic network can overload human minds, making it easier to manipulate or bewilder populations at scale.
Harari also implies that this always-on surveillance-capable network has profound political effects. In democracies, for example, political persuasion and advertising now operate on a continuous, personalized feedback loop (via social media and data analytics). In more authoritarian contexts, leaders can leverage technology to maintain tighter control (through facial recognition cameras, internet censorship, and AI monitoring of dissent). The structure of power shifts: instead of relying on fallible human underlings, leaders may increasingly rely on infallible (or at least indefatigable) algorithms to enforce their will. A government’s authority might rest on data centers and code as much as on laws and police. This dynamic contributes to what Harari calls a coming era of “digital dictatorships” or digital governance – where constant data surveillance pairs with AI decision-making to concentrate power in new ways.
Algorithmic Intelligence and Its Blind Spots: When the Network Is Wrong
While Harari acknowledges the power of AI-driven networks, he also stresses their fallibility. In a chapter aptly titled “Fallible: The Network Is Often Wrong,” he examines how algorithms – despite their superhuman speed and consistency – can make mistakes, exhibit bias, or produce harmful outcomes. As more of society’s critical decisions are handed over to algorithms, from hiring and loan approvals to medical diagnoses and criminal sentencing, the risks of error or injustice compoundalexhruska.medium.com. Harari’s core warning is that bigger and faster information networks do not automatically guarantee truth or fairness. In fact, if we blindly trust algorithmic outputs, we may end up amplifying human prejudices or creating new problems at an unprecedented scale.
Harari notes that algorithms learn from data, and data can carry the biases of society. For example, an AI hiring tool trained on past data might inadvertently learn to favor certain genders or ethnicities if those were historically overrepresented in “successful” hires. Similarly, predictive policing or sentencing algorithms trained on crime statistics might reinforce discriminatory policing patterns, unfairly targeting certain communities. These systems often operate as “black boxes,” meaning their decision logic is opaque even to their creators. This opacity makes it difficult to challenge or appeal an algorithm’s decision – be it a rejected job application or a high risk score assigned to a defendant – because the reasoning is hidden inside complex codealexhruska.medium.com. Harari emphasizes the danger of deferring too much to such inscrutable systems. Without transparency and human oversight, we could institutionalize errors or biases under the false aura of algorithmic infallibility.
To illustrate the “blind spots” of AI, Harari discusses the so-called alignment problem: algorithms relentlessly pursue the objectives we set for them, but lack the common sense or ethical context to avoid harmful side effects. The YouTube recommendation algorithm is a case in point. If tasked simply with maximizing watch-time or engagement, it may “discover” that outrageous conspiracy theories or incendiary content best achieve this goal – and proceed to recommend such content aggressivelysameerbajaj.com. The algorithm isn’t plotting to misinform; it’s aligned with a narrow goal, oblivious to broader truth or social harmony. In Harari’s words, a non-conscious algorithm can learn to spread polarizing disinformation if it serves its predefined aimsameerbajaj.com. This demonstrates how even rational, goal-oriented AI behavior can be catastrophically wrong from a human values standpoint. The network, in these instances, propagates error or falsehood with unmatched speed and reach – a stark contrast to slower human-led errors in the past.
Harari catalogs other domains of algorithmic fallibility as well. He mentions how automated financial trading programs can trigger flash crashes or market failures in seconds – scenarios that old human-only markets would likely avoid or dampen. In medicine, AI diagnostic tools might misidentify illnesses if fed skewed data. In law, risk assessment algorithms used by courts have been found to be racially biasedalexhruska.medium.com. Each case reveals a pattern: when we embed flawed or incomplete human worldviews into code and then upscale it, the consequences of those flaws also upscale. Harari advocates for greater awareness of AI’s limitations and rigorous scrutiny of algorithms, especially when they are used in high-stakes decisionsalexhruska.medium.com. He suggests that society must demand transparency (e.g. the right to know when an algorithm is making a recommendation and on what basis) and maintain human judgment in the loop. Without such measures, the inorganic network’s mistakes could go unchecked, undermining trust and justice at a societal level.
To summarize the key challenges of algorithmic governance that Harari highlights, consider the following examples of potential pitfalls in our AI-driven networks:
- Entrenched Bias: Algorithms used in hiring or college admissions might perpetuate historical discrimination. If past data reflected bias, an AI can learn it and continue to favor certain groups over others, all while appearing objectivealexhruska.medium.com.
- Opaque Decision-Making: Many AI systems (like deep learning models) cannot explain their conclusions in human terms. This opacity means important decisions – such as who is approved for a loan or flagged by a security system – may lack accountability or recourse, since **no human fully understands the basis of the decision】alexhruska.medium.com.
- Misinformation Amplification: Content algorithms on social media optimize for engagement, not truth. As Harari notes, an AI may find that promoting extreme or false content increases traffic, thus inadvertently spreading misinformation to millionssameerbajaj.com. The network can become polluted with falsehoods, making it hard for even well-meaning users to find accurate information.
- Systemic Vulnerabilities: An error in one widely-used algorithm can have cascading effects. For instance, a flaw in a high-speed trading algorithm can destabilize financial markets in moments, or a faulty automated control system in a power grid could cause large-scale outages. The very interconnectivity that gives the network power also means failures can propagate rapidly.
Harari’s message is that we cannot naively entrust our lives to algorithms without recognizing these risks. As we integrate AI into governance, finance, media, and daily decision-making, we must remember that these systems, however advanced, are not infallible. They reflect our instructions and data – and thus our imperfections. In Nexus, he urges readers and leaders to keep humans in the loop, refine algorithmic objectives to align with human values, and insist on transparency to mitigate the network’s blind spotsalexhruska.medium.com.
Shifts in Power Dynamics and the Human Future of Information Networks
Harari believes that the rise of the inorganic network is not just a technological story, but a civilizational one with deep implications for power, politics, and the future of humanity. Information is power, and as the mechanisms for handling information change, so too does the distribution of power. Part II of Nexus outlines how the infusion of AI into networks could reshape who holds power and how governance works:
1. Concentration of Power in Tech Ecosystems: In the past, information networks (schools, press, governments) were managed by human organizations subject to human constraints and laws. Now, tech companies and state agencies that control the algorithms and data have disproportionate influence. Harari notes that a handful of corporations design the AI that filters what billions of people see each day, effectively acting as gatekeepers of reality. This creates new power brokers: e.g. the engineers tweaking a newsfeed algorithm might sway public opinion more than any traditional editor or politician. Control over data and code becomes as crucial as control over land and machines was in the Industrial Age. Harari suggests that without checks, the inorganic network could lead to an oligopoly of information-power, where a small elite (Big Tech or AI-empowered governments) lord over a populace kept under algorithmic watch.
2. The Threat of Digital Dictatorship: Perhaps the most striking political implication Harari draws is the possibility of enduring, AI-supported authoritarian regimes – what he sometimes calls “digital dictatorships.” Because AI can surveil and micro-manage populations so effectively, a tyrannical government equipped with advanced AI might achieve total social control in a way previous dictators only dreamed ofalexhruska.medium.com. Harari warns that if democratic norms falter, we could see the rise of technologically fortified tyranny: rulers who use facial recognition, big data, and predictive algorithms to neutralize dissent before it even forms. Such a regime might be far more stable and invasive than any before, since the tools of repression (censorship algorithms, autonomous drones, constant surveillance) operate with tireless precision. In Harari’s words, the new information systems could enable despots that never die – a hint that dictatorships could potentially extend their lifespan indefinitely by continuously transferring power through an unbroken surveillance-state mechanismalexhruska.medium.com. This is a direct consequence of an information network where human weaknesses (like disloyalty or fatigue in the dictator’s apparatus) are patched with machine efficiency.
3. Algorithmic Governance and Erosion of Human Agency: As societies turn to algorithms to govern complex processes, there is a risk of humans ceding too much decision-making authority to machines. Harari envisions scenarios where algorithms make not just routine decisions but also policy judgments – for example, AI systems drafting laws or policies optimized for certain metrics, or algorithmic systems automatically allocating resources (budget spending, policing priorities) more efficiently than legislatures. While this might increase efficiency, it raises profound questions: Who do we hold accountable for decisions made by AI? If a flawed algorithm causes harm, responsibility can be diffuse. Moreover, if people get accustomed to algorithms telling them what to do (whether it’s a navigation app directing traffic flows or a personalized AI advisor guiding life choices), human agency and critical thinking could weaken over time. Harari even illustrates this with a dramatic cultural example: we might one day have AI-written holy scriptures or political manifestos that people follow, effectively outsourcing moral and ideological leadership to non-human intelligence. In such a world, the traditional human role in creating meaning and direction could diminish, as individuals defer to algorithmic “wisdom” or authority.
4. The End of the Human-Dominated Era?: Harari ultimately asks whether these trends signal the end of the era of history where humans were the sole authors of the human story. If AI-driven systems come to mediate most information, make many decisions, and even create cultural content, then the trajectory of society might no longer be determined primarily by human choices. He provocatively suggests we may be nearing a point where history has a new protagonist: artificial intelligence. In Part II, he introduces the notion that humanity could become “cocooned in a web of unfathomable algorithms managing our lives”, even re-engineering our bodies and minds as we increasingly integrate technologyjpost.com. This is not to say AI will gain conscious desires to rule, but through its pervasive utility and our growing reliance on it, AI could effectively wield power – guiding economies, influencing politics, and shaping culture – with humans more passengers than drivers. Harari labels this looming divide as a potential new “Silicon Curtain,” one that separates humanity from the new algorithmic overlords that we have createdalexhruska.medium.com. Instead of the Iron Curtain that once split the world by ideology, the Silicon Curtain might split the world by intelligence – organic versus inorganic. In the worst-case vision, all humans (not just one country or class) could find themselves on the inferior side of this divide, subject to the decisions of superior AI intellectsalexhruska.medium.com.
Despite these ominous possibilities, Harari does not present them as inevitabilities but rather as urgent challenges. The tone of Part II is cautionary: it lays out how dramatically the landscape has shifted with the inorganic network, to compel us (readers, policymakers, citizens) to confront the dangers and make conscious choices. Harari implies that the future is still in our hands – but only if we act deliberately. For instance, he argues we might need to establish new rules and regulations to prevent the worst outcomes. He draws an analogy that just as society outlawed counterfeit money, we might consider outlawing “counterfeit humans” – AI agents masquerading as people – to protect the integrity of discourse and democracyalexhruska.medium.com. He also alludes to the need for international cooperation to prevent an uncontrolled AI arms race, which could accelerate these power imbalances and surveillance nightmares. Essentially, Harari calls for a reevaluation of how we govern information networks now that non-humans play a central role. Without such intervention, the default trajectory could undermine the very foundations of liberal democracy and human dignity.
Conclusion: Harari’s Perspective on the Inorganic Network’s Legacy
In Part II of Nexus, Yuval Noah Harari provides a sweeping and detailed analysis of how non-human information systems have revolutionized our networks. He charts a journey from the early mechanical aids that merely boosted human capability, to modern AI that fundamentally alters who (or what) can create, disseminate, and control information. The core arguments and themes can be summarized as follows:
- Autonomous Agents in the Network: Computers and AI have joined information networks not just as tools, but as active agents. This changes the structure of networks, introducing non-human decision-makers for the first timesameerbajaj.com. The network is no longer exclusively human-to-human; algorithms now communicate, negotiate, and even “think” on our behalf within these systems.
- Unprecedented Scale and Reach: Digital systems operate at speeds and scales unimaginable in earlier eras. Information can circle the globe in a second, and data stores hold billions of records. The scale of information networks has exploded, enabling wonders like real-time global collaboration – but also risks like global cascades of misinformation or rapid systemic failures.
- Perpetual Operation and Surveillance: The inorganic network never needs to pause, giving those who control it an ability to monitor and influence at all times. Harari’s view is that this relentless operation tilts power toward surveillance and control, empowering both corporations and governments to watch individuals in granular detailalexhruska.medium.com. This raises fundamental issues for privacy and liberty in the digital age.
- Algorithmic Governance – Pros and Cons: As algorithms take on more decision-making roles, we gain efficiency and consistency, but we also face loss of transparency, accountability, and humanity. Harari underscores that algorithms, lacking conscience or common sense, can lead us astray if we treat their outputs as impartial truth. The network’s power must be checked by human ethics and oversight to prevent dystopian outcomesalexhruska.medium.com.
- Human Future and Agency: Finally, Harari reflects on what these transformations mean for the human future. Will we remain the masters of our information networks, or become subjects of decisions made by AI and the elites who run them? The scale of change in information power – from Ceaușescu’s analog spy web to AI’s digital panopticon – suggests that without deliberate action, humans might lose agency over the systems that shape our societiesalexhruska.medium.com. Harari’s perspective is a mix of awe at the technological achievements and anxiety about their implications. He urges that safeguarding the human in the loop is vital if we are to retain control over our destiny.
In sum, Part II The Inorganic Network paints a picture of a world in which information technologies have escaped the bounds of mere human assistive tools and become potent actors in their own right. Harari’s analysis is rich with historical context – reminding us how far we’ve come from the days of Gutenberg’s press or the early telegraph – and sharply attuned to the present and future challenges of AI-driven networks. He illustrates that the information network has grown not only larger and faster, but qualitatively different: it is now partly “inhabited” by non-human intelligences. This shift offers great opportunities (such as curing diseases with AI or coordinating global action through instant data sharing), but also threatens to upend social structures (through surveillance regimes or mass manipulation) and even to dethrone humanity as the prime mover of historyjpost.comalexhruska.medium.com.
Harari’s closing tone in this section is one of wary vigilance. He recognizes the transformative power of the inorganic network, yet he calls on readers to reflect on how we might harness it for good. The implications for power, freedom, and human values are immense. By the end of Part II, it is clear that the story of information networks has reached a pivotal juncture: one where our choices about technology and governance today will determine whether the future is defined by digital tyranny and human obsolescence, or by a wiser integration of machine capabilities into a humane societyalexhruska.medium.com. Harari sets the stage for confronting these questions, emphasizing that the next steps (explored further in subsequent parts of Nexus) will be crucial in shaping the human future in the age of the inorganic network.
Sources:
- Harari, Yuval Noah. Nexus: The History and Future of Information (Vol. 1). Part II “The Inorganic Network.” (Summary of key themes and examples)alexhruska.medium.comalexhruska.medium.com
- Hruska, Alex. “Book Review and Summary: Nexus by Yuval Noah Harari.” Medium, 2024. (Overview of Part II’s arguments on AI as a new autonomous agent, 24/7 networks, and the fallibility of algorithms)alexhruska.medium.comalexhruska.medium.com
- Bajaj, Sameer. “Nexus – Summary and Notes.” Sameer Bajaj Blog, 2024. (Collected quotes from Part II highlighting AI’s autonomous learning and decision-making, and Harari’s commentary on algorithmic content like conspiracy theories)sameerbajaj.comsameerbajaj.com
- Harari, Yuval Noah (interview/commentary via J. Post). “Reviewing Nexus.” The Jerusalem Post, 2024. (Harari’s warnings about AI-driven surveillance states and the notion of a “Silicon Curtain” dividing humanity and AI)jpost.comalexhruska.medium.com