Nexus: A Brief History of Information Networks from the Stone Age to AI

Yuval Noah Harari, 2024

Fern Press

Review by Tina Chung

What is this book about?

The book begins with the question, “If we are so wise, why are we so self-destructive?” (Harari, 2024, p. 11). Drawing on the history of human-created information technologies, from the Stone Age to artificial intelligence, Harari examines how humanity has often failed to use new inventions wisely. The book explores how information networks shape societies, power structures, and knowledge production in both positive and detrimental ways, and it argues that the ways information is created, organized, and controlled influence whether societies move toward democratic participation or toward centralized, potentially totalitarian systems. Harari therefore emphasizes the wise use of AI and the need for collective, collaborative action in developing and deploying it for the benefit of humanity.

What is the main argument of each part of this book?

Part 1: Human Networks

This book consists of three parts. Part 1 explores the historical development of human information networks, tracing how communication technologies such as writing systems, the printing press, and radio have shaped societies across time. These technologies are not simply tools for sharing information; rather, they actively influence political systems, cultural narratives, and ideological conflicts. The author emphasizes that large-scale information networks are often shaped by institutions, including bureaucracies and cultural or religious authorities, which determine how information is produced, organized, and controlled.

A central concept introduced in this section is the self-correcting mechanism. The author explains that the strength of a society’s self-correcting mechanisms determines its ability to adapt, revise its information and knowledge, and respond to new challenges. In systems with strong self-correction, such as scientific communities, knowledge evolves through continuous questioning, critique, and revision, enabling the system itself to change.

In contrast, weaker self-correcting mechanisms tend to preserve existing power structures and limit change. In such systems, information may be controlled or manipulated to maintain authority, which can lead to more centralized and even totalitarian forms of governance. The author shows that throughout history, both democratic and authoritarian systems have been shaped by how effectively they allow information to be questioned and corrected. The book also argues that information can serve multiple purposes: it can inform, mislead, connect, or control. Technology should therefore not be understood as neutral; its impact depends on how it is used within specific social and religious contexts, and on how those contexts shape its development.

I think this concept is particularly relevant to contemporary discussions of emerging technology in education. As digital tools and artificial intelligence become more integrated into learning environments, educators must consider whether these technologies support strong self-correcting processes or reinforce the existing inequalities and power dynamics that underwrite emerging technology. The idea of technoskepticism (Krutka et al., 2020; Pleasants et al., 2023), particularly its emphasis on critically understanding and evaluating technology use in practice, aligns with this self-correcting mechanism. It encourages educators and researchers to question how technologies influence knowledge production and dissemination, and whether they promote democratic or centralized, totalitarian values.

Part 2: The Inorganic Network

Part 2 examines the transition from human-centered communication to what the author describes as inorganic networks, in which information flows increasingly involve nonhuman actors such as computers and artificial intelligence. In earlier periods, communication occurred primarily through direct human interaction, relying on oral exchanges within communities to share information and knowledge. The invention of writing introduced new possibilities, allowing information to be recorded and transmitted across time and space through documents, without the need for direct human involvement in each exchange.

With the development of computers, communication systems evolved further into computer-to-computer networks, enabling information to circulate with minimal human intervention. This shift becomes even more significant with the rise of AI, which can generate content, produce knowledge, and influence cultural and political narratives independently. AI systems are now capable of composing music, generating images, writing texts, and even creating new forms of cultural and creative expression, thereby acting as nonhuman entities capable of performing tasks once reserved for humans.

While these developments demonstrate the rapidly advancing capabilities of AI, which can now perform some creative tasks once considered uniquely human, the author also raises important concerns about their implications. Although the author argues that AI can produce human-like creative works and represents a nonhuman form of intelligence, I contend that it does not yet surpass human creativity. Harari’s characterization of AI as a nonhuman intelligence underscores how we should understand and treat it: not merely as another technological invention, like the computer or the printing press, but as a powerful force with both potential benefits and harms. Humanity should therefore be alert to the significant challenges and risks it presents.

In my view, AI-generated output is primarily an amalgamation of large datasets rather than a product of lived human experience. Moreover, AI is itself a human creation, and humans are not neutral in forming perspectives or making decisions. As a result, AI is not a neutral tool; it reflects the values, assumptions, and biases embedded in its design and in the societies in which it is developed. Indeed, AI has the potential to shape not only what information is produced but also how it is interpreted and used, in the service of democratic or opposing values. The author warns, for instance, that AI systems, because of their speed and autonomy, can become powerful agents of surveillance and control: they can process vast amounts of data, monitor behavior, and influence decision-making in ways that are not always visible or transparent. This raises concerns about privacy, autonomy, and the potential for AI to reinforce centralized systems of power. We should therefore adopt a critical standpoint when using AI systems to avoid reinforcing such concentrations of power.

The book does not reject AI or technological innovation. Instead, it calls for a critical and intentional approach to its use and creation. AI can support creativity, efficiency, and new forms of knowledge production, but its development and application must be guided by ethical considerations and a collaborative commitment to the common good.

Part 3: Computer Politics

The final section of the book focuses on the risks of the uncritical use of artificial intelligence and the need for effective governance. AI systems can create, interpret, and act on information at a scale that significantly influences societies. For example, a notable portion of content on platforms like Twitter is generated by bots, showing how AI can shape public and political opinion, reinforce or challenge power structures, and accelerate the spread of misinformation.

Given these risks, the author argues that AI governance cannot be left to individual companies, nations, or isolated groups. Left solely to a single company, AI development could weaken democratic institutions if it goes unchecked by a diverse array of stakeholders. Governance instead requires coordinated national and global efforts involving multiple stakeholders, including keeping humans “in the loop” in AI development and use and establishing regulatory systems to oversee potential risks.

A key argument in this section is the importance of aligning AI systems with human values through intentional design, regulation, and ongoing oversight. The author challenges overly optimistic views of technology, warning that AI does not automatically lead to democratic progress. Instead, the book emphasizes the need for strong institutions with self-correcting mechanisms that can monitor, evaluate, and adapt technological systems over time. To support this argument, the author draws on historical examples, such as the decline of war, which resulted not from improved human nature or individual pursuit of goodness but from stronger institutions, laws, and shared norms. Similarly, the responsible integration of AI into society depends on effective governance and collective decision-making.

What are the implications of this book for education?

For me, one of the most important ideas this book offers for educators’ professional development is the concept of the self-correcting mechanism. The book shows how societies and institutions use self-correcting processes to shape how information is used and understood. This idea is closely connected to taking a technoskeptical stance toward AI in education: educators need to critically examine how AI shapes knowledge, influences power, and supports either democratic or more controlling systems.

This insight can be applied directly to education, especially as educators adopt AI and other emerging technologies. Teachers should develop their own self-correcting practices by regularly reflecting on how they use these tools, evaluating their impact, and improving their use in ways that serve both their own growth and the betterment of society. For example, educators can question the accuracy of AI-generated content, identify potential biases, and consider how AI affects students’ understanding of knowledge and truth.

To do this, educators need to think critically about how they use technology in both their daily and professional lives. This includes understanding how AI systems work, recognizing their limitations, and considering their social and ethical impacts. By doing so, educators can use AI more responsibly, supporting both their own professional development and their students’ learning.

Connecting this to the final part of the book, an important implication for educators is the need for collaboration. As AI systems become more complex, it becomes increasingly difficult to fully understand how their underlying algorithms operate. Because of this complexity, no single individual or institution can fully address the risks associated with AI; addressing them requires collaboration among diverse stakeholders, including educators, researchers, policymakers, and technology developers. This positions educators as collaborative partners in shaping the ethical and responsible development and use of AI.

Overall, the book makes an important contribution to discussions of technoskepticism (Krutka et al., 2020; Pleasants et al., 2023; Pleasants et al., 2025) by elaborating on both the opportunities and risks associated with AI and technology. It emphasizes the importance of critical engagement, ethical responsibility, strong self-correcting systems, and interdisciplinary collaboration in ensuring that technology supports democratic values and the common good rather than reinforcing non-democratic power structures. In doing so, the author concludes by presenting a potential response to the central question raised at the beginning of the book: “If we are so wise, why are we so self-destructive?” (Harari, 2024, p. 11).

References

Harari, Y. N. (2024). Nexus: A brief history of information networks from the Stone Age to AI. Fern Press.

Krutka, D. G., Heath, M. K., & Mason, L. E. (2020). Technology won’t save us–A call for technoskepticism in social studies. Contemporary Issues in Technology and Teacher Education, 20(1), 108-120.

Pleasants, J., Krutka, D. G., & Nichols, T. P. (2023). What relationships do we want with technology? Toward technoskepticism in schools. Harvard Educational Review, 93(4), 486-515.

Pleasants, J., Radloff, J., & Mueller, A. (2025). Learning to be technoskeptical: Engaging pre-service teachers in critical examinations of educational technologies. Contemporary Issues in Technology and Teacher Education, 25(2). https://citejournal.org/volume-25/issue-2-25/current-practice/learning-to-be-technoskeptical-engaging-preservice-teachers-in-critical-examinations-of-educational-technologies/