Are We Setting Teachers Up to Take the Fall for Our Inevitable AI Problems?
Lots Happening at Civics of Tech!
CONFERENCE!
Our fourth virtual conference will be on July 31 and August 1! Head over to our conference page to learn more. Session proposals will be reviewed soon and decisions will be sent out.
TECH TALK!
Next Tech Talk on July 8th (NOTE that this is the SECOND Tuesday of July): Join us for our monthly tech talk on Tuesday, July 8 from 8:00-9:00 PM EDT (GMT-4), an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.
BOOK CLUBS!
July Book Club: We’re reading The AI Con, by Emily Bender and Alex Hanna. Join us for a conversation on Tuesday, July 22 at 8:00 PM Eastern Time, led by Charles Logan. Register on our events page here!
Bonus Book Club: We’re also reading Empire of AI, by Karen Hao, and we’ll hold our book club conversation at the end of the first day of our conference: Thursday, July 31 at 3:00 PM Eastern Time. See more on our events page.
By Jessa Henderson
I recently attended the inaugural meeting of the European Conference on Critical Edtech Studies (ECCES) at the University of Teacher Education (PHZH) in Zürich, Switzerland. The conference’s streams (analysis of AI/edtech in practice; policy, discourse, and government; political economy; learning, pedagogy, assessment, and teacher education; histories and futures; and social justice, sustainability, ethics and rights) provided a range of perspectives, methodologies, and backgrounds. The conference was the brainchild of Mathias Decuypere, Sigrid Hartong, Jeremy Knox, and Ben Williamson and was a testament to the idea that the small sparks we talk about informally with colleagues and friends can become something tangible and meaningful to many. (Much like the Civics of Tech Project!)
One of the first presentations I attended at ECCES was “The longest year: Teaching, learning and living through AI slop” by Carlo Perrotta. Carlo began by saying:
“We are living in entropic times (Stiegler, 2018): technological pervasiveness and hype, coupled with other intersecting causes of personal, geopolitical, environmental anxiety are dissipating vital energies and creating a state of political and affective exhaustion.”
Hearing those words, I immediately felt relief, the release of a tension I had been holding without realizing it. Recently in my work, I’ve felt overwhelmed and fearful. Not an existential “AI-is-going-to-end-the-world” fear, but the fear of balancing two conflicting pressures: the importance of slow science, of grounding recommendations for teachers in empirical work, against a moment in which the field seems to be changing too quickly to wait, and teachers and institutions need guidance now. I spent the remainder of the conference acknowledging and grappling internally with this tension while listening to and learning from others in the field.
The one thought that I cannot shake is that teachers are being set up:
Are we setting teachers up to take the fall for future AI risks that institutions haven’t properly worked through, weaponizing teachers’ desire for professional autonomy and agency against them?
I am a former US high school social studies teacher and a technology teacher educator. My research focuses on preparing teachers to use technology and data effectively in their instructional practices. I’m an advocate for interrogating the skills and knowledge that teachers and students need to be successful with, and critical of, AI systems. I believe that teachers deserve professional autonomy and that their agency needs to be protected as an educated “human-in-the-loop” with AI systems. And yet, I’m worried that this may swing too far and backfire.
At ECCES, I learned about a recent policy paper from the Department for Education in the UK that states, “Teachers, leaders and staff must use their professional judgement when using these (generative AI) tools. Any content produced requires critical judgement to check for appropriateness and accuracy. The quality and content of any final documents remains the responsibility of the professional who produced it and the organisation they belong to, regardless of the tools or resources used.” In the past, I have openly praised AI policy papers and recommendations that centered and protected teacher autonomy. For example, the US Department of Education’s past work (now buried on the website by the current administration) highlighted teachers as the driver and the human-in-the-loop, but did not assign them sole responsibility.
We can see this trend toward personal responsibility playing out in higher education with student use of generative AI. In a recent critique of generative AI, Flenady and Sparrow (2025) argued that generative AI opens a “testimony gap”: no one can be held accountable for the activity of the machines or the inaccuracy of their results. They wrote, “A ‘testimony gap’ opens where GenAI systems are taken – despite the limitations of their design – by human agents to be reporting on the world, but cannot be held responsible for mistaken or otherwise epistemically inadequate reports” (p. 5). Placing the burden of responsibility solely on individual teachers and/or students is the easy way out for institutions and policymakers.
My fear is that the current discourse around teacher freedom will ultimately be used to decrease teacher autonomy and agency. I worry that governments and schools will avoid the difficult task of drafting robust AI policies by using teacher autonomy and agency as the excuse. In doing so, they may weigh teachers down with new roles, new knowledge, and new responsibilities, on top of all the roles, knowledge, responsibilities, and expectations of the past. Will teachers willingly give up some of their professional freedom in exchange for the ability to focus on the things that brought them into the profession to begin with? In my dissertation research, I asked educators exploratory questions about their excitement and concerns about decision-making with AI (Henderson & Milman, 2024). To my surprise, a theme emerged in which some teachers were excited to share responsibility and blame. For example, participants reported: “I am excited about having such a useful resource at my disposal. It makes me feel protected”; “...There’s also some burden sharing, so it’s not completely my error if the algorithm recommends a similarly wrong plan”; and “I’m excited about the ability to rely on something outside of myself (less personal blame).” If we give teachers freedom without protections, will they willingly give up some of that autonomy to feel protected?
There has to be a way for governments and schools to share the burden; for teachers to have the professional agency they deserve with AI systems; for teachers to receive the education and training that equip them to be successful; and for policies to be in place that protect them while holding technology companies accountable for both the good and the bad of the products they unleash on the world. The focus on teachers as the main guardrail protecting our students and society from the negative effects of AI asks too much of them. What do teachers give up to take on this responsibility? How does the purpose of education change? What do our students lose? I’m not advocating that we stop the work of preparing teachers and students for AI, but I am advocating that we consider the larger political, social, and organizational systems that teachers exist within. We must not just prepare, but also protect, teachers and students in these ‘entropic times’.
References
Flenady, G., & Sparrow, R. (2025). Cut the bullshit: Why GenAI systems are neither collaborators nor tutors. Teaching in Higher Education, 1–10. https://doi.org/10.1080/13562517.2025.2497263
Henderson, J., & Milman, N. B. (2024, April 11–14). Educator beliefs on AI recommendation systems: An exploratory thematic analysis [Conference paper presentation]. AERA Annual Meeting, Philadelphia, PA, USA.
Stiegler, B. (2018). The Neganthropocene. Open Humanities Press.
United Kingdom Department for Education (DfE). (2025, June 10). Generative AI in education. Policy paper.
United States Department of Education (USDOE), Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. Washington, D.C.