Ghosts in the AI Machine, Frederick Douglass, and Civics of Tech at AERA
Next Tech Talk: Our April Tech Talk will feature the theme of “Caregiver Resistance to (Ed)Tech.” It will be on Wednesday, April 15 at 8:00 PM ET. Register here.
Upcoming Book Clubs:
We’re reading The Digital Delusion: How Classroom Technology Harms our Kids’ Learning - And How to Help Them Thrive Again by Jared Cooney Horvath. Join us on Wednesday, April 22nd at 7:00 PM ET. You can register here. You can read more about our mixed feelings and reasons we’re reading this book in this blog post.
We’re also reading Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI by Rua M. Williams.
Join us on Wednesday, May 20th at 7:00 PM ET. You can register here and purchase the book here.
Ghosts in the Machine AI Documentary Screening: Details are forthcoming about screening the film. Learn more on the documentary website.
Critical Tech Hall of Fame: We are welcoming submissions for our forthcoming Critical Tech Hall of Fame. If you are interested in writing a submission, click here to submit your nomination by Sunday, April 30th.
by Dan Krutka, Stephanie Smith Budhai, Marie Heath
This blog post is coming a little later on Sunday because our Civics of Tech leadership had a busy week attending, presenting, and communing at the AERA Annual Meeting in Los Angeles. Thanks to everyone who came out to our AERA meet up on Friday! It was rejuvenating to be with so many of you in person.
One of our highlights was a symposium chaired by Stephanie Smith Budhai and Marie Heath titled, Ghosts in the AI Machine: Unforgetting the Artificial Histories Haunting Educational Futures. Participants included many Civics of Tech community members and critical friends. In our blog this week we share our overview of the session, as well as Dan’s discussion of the papers. Thanks to Charles Logan, Melissa Warr, Shana White, Thema Monroe-White, and Evan Shieh for their scholarly contributions to the symposium.
(The session featured below shares a name with the new Ghost in the Machine AI documentary, for which we will be hosting a screening… announcement coming in next week’s newsletter)
Overview of Symposium: Ghosts in the AI Machine: Unforgetting the Artificial Histories Haunting Educational Futures
AI is both haunted and haunting. Filled with specters of the past, it also nudges our choices for the future. This symposium session gives shape and form to the specter of AI in education. AI is not a bloodless calculation of computing floating in an ephemeral “cloud.” Rather, it is an embodied and material technology, relying on historically extractive practices of mining, manufacturing, data theft, water consumption, and labor to give it weight, heft, and substance (Crawford, 2021). Hao (2025) argues AI is an empire, drawing on imperial practices of domination of people, politics, land, culture, and economy. Buolamwini (2024) demonstrated that AI accesses the collective memories of master narratives through its diet of stolen data. Despite these injustices, in education we tend to treat AI as a friendly Casper, inviting it to assist us or our students when we think we might benefit from so-called “co-intelligence” (Mollick, 2024).
One of AI’s spectral forms is an elusive promise for educational change. Education venture capitalists like Sal Khan tout AI as a “super tutor” revolutionizing education (Boryga, 2024). The U.S. President is similarly enthusiastic about the potential of AI to act as an agent of change, issuing an Executive Order on the importance of AI in education for innovation and the workforce (White House, 2025).
Given this AI boosterism and presentism from many powerful places, in this session we ask: What does it mean for us as educators to consider AI as an agent of educational change if we do not consciously calculate the weight of its histories on the change we hope to make?
We ask this question not to dissuade educators from using AI, but rather because we believe that, as the AERA 2026 call notes, “There is no separation between past and present, meaning that an alternative future is also determined by our understanding of our past.” The past is prologue, and AI machines draw only from the past. Unlike children, AI cannot imagine a future; it can only use haunted math to generate and predict flawed paths forward. Through the work of this symposium, we argue for ways to limit the impact of the specter across the data, ecology, and histories that it haunts and that are haunted by it.
In this symposium, scholars across disciplines discussed the ways they are uncovering ghosts in the AI machine. The first papers expose the psychosocial harms of stereotype threat on already vulnerable learners and offer empirical evidence that AI discriminates by socio-economic status when grading computational assignments with clear correct and incorrect answers. The second set of papers offers pedagogical practices for exploring the ecological impacts of AI and provides an ethical, justice-centered framework for adopting AI in schools. Taken together, they analyze AI technologies acting as accidental technological agents of change and offer purposeful responses for teaching toward more just technological futures.
Discussant Remarks
Good morning. It is an honor to serve as the discussant for this session with many critical friends and colleagues. This session is titled, “Ghosts in the AI Machine: Unforgetting the Artificial Histories Haunting Educational Futures.” For me, Thema and Evan’s paper set the tone for this session by explaining that we must “[weave] socio-historical context into technical computing education. Rather than treating algorithms and data as neutral,” their Emancipatory Artificial Intelligence approach “highlights [the] deep entanglement with racial ideologies and systemic inequities… By ‘unforgetting’ harmful histories, learners begin to understand how AI can reinforce oppression or advance liberation.”
I’d argue that one reason we are haunted by our histories is because this country is drenched in marketing—the marketing of white supremacy, the marketing of patriarchy, the marketing of technology, and the marketing of empire as we give conference presentations while other people have to live with where our bombs drop. Let us not forget how the edtech industry is a child of the U.S. military-industrial complex.
This country has been marketed as the “land of the free.” The land of the free. This is a country whose history is characterized by the dispossession of Indigenous homelands where we have ignored the sovereignty guaranteed by treaties (at least it was finally acknowledged in the McGirt case). We are marketed as the land of the free in a country that created racial hierarchy through eugenic concepts of “intelligence” that place light skin on top and dark skin on bottom. We are marketed as the land of the free as AI technologies help ICE capture our neighbors and academic freedom is waning.
How do we challenge such marketing? Frederick Douglass responded with some counter-narrative marketing. When this country said we were in a “Civil War” and the South claimed there was a “War of Northern Aggression,” he labeled the events as “the Slaveholders’ Rebellion.” He sought to place blame where it was deserved: with the elitist slaveholding class. When our history books talk of the end of Reconstruction as the “Compromise of 1877,” he called it “peace among the whites.” Whether Reconstruction or Black Lives Matter, white people in this country have a short attention span for racial justice, but immense focus on racial injustice.
The 1950s might as well be an episode of Mad Men. First, this country marketed itself as “One nation under God,” adding the phrase to the Pledge of Allegiance and then adopting “In God We Trust” as the national motto. This marketing was intended to portray a United States (that still enforced racial segregation and ignored the sovereignty of other countries as part of its “Cold War”) as Christian and the Soviet Union as godless. Good-versus-bad marketing. Just a few years later the U.S., and John McCarthy specifically, offered another marketing term: artificial intelligence. It’s always been a marketing term. Of course, as Kate Crawford said in her 2021 book, Atlas of AI:
…AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.
(You can find that quote on our Critical AI Quotes Activity on the Civics of Technology website.)
The four papers in this session seek to summon the “Ghosts in the AI machine” and help us in “un-forgetting the artificial histories haunting our educational futures.” We must teach students to resist these marketing tricks by educating them about what we call AI. Notably, one of the activities in Thema and Evan’s paper asks students to rename AI. What is a more accurate name that exposes powerful corporate interests? We can start by pointing out to our students that the very term Artificial Intelligence is bullshit. I think Frederick Douglass would approve.
The scholars in this session are working from a tradition that includes Safiya Noble’s Algorithms of Oppression, Ruha Benjamin’s Race After Technology, Joy Buolamwini’s Unmasking AI, and the long tradition of critical pedagogy — from Paulo Freire to Gloria Ladson-Billings and many others. They are bringing that tradition into K-12 classrooms through a range of studies and educational approaches.
In paper 1, "Dollar General or Whole Foods?," Melissa shows us the ghosts in the machine by showing us the data. Warr created identical math problems, but with class and cultural stratifications centered around whether one shopped at Whole Foods or Dollar General, and then had eight leading LLMs grade them. The broad range of grades and the language of feedback laid bare the bias in GenAI technologies. Major GenAI models generated text that communicated to the Dollar General student with more authority, more command, more of a talking-down. Class and race hierarchy haunt these systems. If teachers use GenAI for grading then these biases will make their way into gradebooks and the credentialing system that is schooling.
In paper 2 "From Corporate Greenwashing to Thirsty Data Centers," in the lineage of Kate Crawford, Charles Logan points out that AI is material. Charles follows Black youth in Chicago as they audit ChatGPT through a technoskeptical lens and arrive at a sophisticated critique of corporate greenwashing, yet another American marketing trick. Students such as Stephanie and Kiara see through it by pointing out that signatories don't mean anything, and companies that drain water from communities while promising carbon neutrality are just saying "we'll try our best." This is the type of resistance we need and I am not surprised it comes from young people.
Paper 3 features the critical work happening at the Kapor Center in "Designing a Guide for Equitable AI and Tech Justice in K-12 Education." Shana and Marie similarly name many of the ghosts in AI and offer teachers a curricular framework to exorcize them. The Kapor Center's two-guide framework—Responsible AI and Tech Justice and the forthcoming Responsible AI Literacy—puts racial and social justice at the center of K-12 computing education rather than the periphery. It asks students to interrogate not just how to use AI but who built it, who invested in it, and who bears the costs. We need more of this work.
In paper 4 "Data is Power," Thema and Evan offer an Emancipatory Artificial Intelligence framework that refuses the dominant naming and narrative. By having students analyze the biases embedded in AI-generated stories and renaming AI itself, they position young people not just as consumers of technology but as counter-narrative makers who can see how the machine encodes oppression. The pedagogy and lessons they present are powerful. After reading the paper I was left thinking that I wanted more evidence about the effects of the curriculum. I wanted more data, more proof. But then I realized that more data is not the answer. They reminded me of the quote of a critical technology scholar from 1961:
We must keep our moral and spiritual progress abreast with our scientific and technological advances. This poses another dilemma of modern man. We have allowed our civilization to outdistance our culture...Civilization refers to what we use; culture refers to what we are. Civilization is that complex of devices, instrumentalities, mechanisms, and techniques by means of which we live. Culture is that realm of ends expressed in art, literature, religion, and morals for which at best we live. The great problem confronting us today is that we have allowed the means by which we live to outdistance the ends for which we live. We have allowed our civilization to outrun our culture, and so we are in danger now of ending up with guided missiles in the hands of misguided men. This is what the poet Thoreau meant when he said, 'Improved means to an unimproved end.' If we are to survive today and realize the dream of our mission and the dream of the world, we must bridge the gulf and somehow keep the means by which we live abreast with the ends for which we live.
Of course, that critical technology scholar was none other than Martin Luther King. In short, we do not need more data. We need a call to conscience.
How do we counter the marketing of oppression that serves as “ghosts in the AI machine”? These four papers show us. We teach students about how AI really works. We teach students to see who benefits, who is harmed, and what a just world looks like. Teaching about AI cannot be a technical pursuit. It is a moral pursuit. We must struggle alongside students and their communities to align technologies to our values. And we should turn to those who aren’t marketing, but imagining a more humane and just world. Thank you.
References
Boryga, A. (2024, November 27). How AI will impact the future of teaching: A conversation with Sal Khan. Edutopia. https://www.edutopia.org/article/how-ai-will-impact-the-future-of-teaching-a-conversation-with-sal-khan/
Buolamwini, J. (2024). Unmasking AI: My mission to protect what is human in a world of machines. Random House.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Hao, K. (2025). Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Random House.
White House. (2025, April 23). Advancing artificial intelligence education for American youth. Executive Order.