No AI Gods, No AI Masters

Civics of Technology Announcements

Next Tech Talk: The next Tech Talk will be held on October 7 at 8:00 Eastern Time. Register here or on our events page. Come join our community in an informal discussion about tech, education, the world, whatever is on your mind!

Next Book Club: We’re reading Culpability by Bruce Holsinger. Join us to discuss it on Tuesday, October 14th, 2025 at 8pm Eastern Time. Be sure to register on our events page!

Latest Book Review: The Mechanic and the Luddite, by Jathan Sadowski (2025).

By: Olivia Guest, Iris van Rooij, Barbara Müller, Marcela Suárez

In the years and months leading up to writing our Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia and our position piece titled Against the Uncritical Adoption of 'AI' Technologies in Academia, we have struggled to convince some of our colleagues and students of the deskilling impact of these technologies. These trials and tribulations are perhaps well known to those who agree with us. Yet they may pass unnoticed by others, or are sometimes even misunderstood by students.

What is our problem with AI technology in education? Why do we need critical perspectives on AI? AI products have frustrating and harmful drawbacks across their varied proposed use cases. They have issues such as a sordid history; shady business practices; a shameful labour and general human rights record; a horrendous pattern of producing misinformation and corrupting the scientific record; sexist and racist output; and clear harms to the environment through pollution, water and energy consumption, and land use. Therefore, as academics, we have a professional responsibility to our students to teach them about these issues without corporate interference (Guest et al., 2025; Suarez et al., 2025). We refuse to be mired in terminological squabbles, so herein by AI we mean any displacement technology that harms people, obfuscates cognitive labour, and deskills (as defined in Guest, 2025).

Setting the Stage

In this short piece, we wish to a) draw attention to the explicit damage these technologies do in learning and research environments (for more, see Guest et al., 2025). The gist is that our employers and responsible colleagues, such as committees in charge of academic conduct, have not trodden carefully and thoughtfully when it comes to AI technology in academia, allowing the full-blown normalisation of AI technologies and their introduction into our software systems and educational infrastructure. While framed as so-called ‘tools’, these technologies are rather harmful technological scams. Additionally, we wish to b) elaborate more deeply on the interorganisational issues at play in such contexts, which result in these worries being downgraded to secondary and tertiary concerns in favour of false arguments and other priorities.

Herein we will cover a little of the now widespread use of LLM-based chatbots, and before them image generators, as consumer products that also target students, from 2022 onwards. Our efforts here have been to foster questioning and rejection of these tools in learning settings. Finally, we will touch on the moment that pushed us over the proverbial edge into writing and sharing the first draft of the Open Letter.

The Events and Tipping Point

In the academic year 2022/2023, ChatGPT burst onto an academic scene already damaged, compromised, and eroded: facial recognition software was already being used for surveillance and so-called predictive policing, e-proctoring was already enabling us to spy on our students, and self-driving cars had been just a couple of years away for about a decade. In some sense the singularity was already here: our critical thinking was stuck, stale, and stagnant on the exact phraseology that our own Artificial Intelligence Bachelor's and Master's programmes were meant to be sceptical of: hype, marketing, and nonsense AI products. This is something we, as seasoned academics, know about from previous AI summers and winters: the false promise of the automated thinking machine to be built in "2 months" (McCarthy et al., 1955, p. 2). For example, for five years Olivia has been teaching students the pre-history of AI and past boom-and-bust cycles in AI as a science, in part to try to temper the tide. Each year this got harder, as students came with increasingly entrenched beliefs against critically evaluating AI, a situation aggravated by colleagues assigning uncritical reading material authored by non-experts. Additionally, Iris has written several blogposts (van Rooij, 2022, 2023a, 2023b) which prefigure her reasoning for advancing "critical AI literacy" (CAIL; a term inspired by Rutgers' initiative) and for proposing that we, as a School of AI, take university-wide responsibility for developing and teaching CAIL. Indeed, Iris teamed up with Barbara to do exactly this.

Meanwhile, many academics not only looked the other way, but ran with it (van Rooij, 2023a). They bent the knee, willingly, knowingly, or otherwise, to the whims of the technology industry. In so doing, they promoted AI products as ‘tools’: as so-called conversational partners, for instance, for generating and refining research ideas, or for students to improve their assignments, alongside many other incoherent and damaging intrusions of AI technologies into our already fragile scholarly ecosystem. Often, such promotion was accompanied by nonsense arguments, such as 'using AI in education is inevitable', or 'use of AI in education is necessary to teach students to apply it responsibly' (usually without explaining what 'responsibly' means in light of the harms).

In contrast, we cannot look the other way. As the AI summer rolls on with heatwave upon heatwave, we directly experience its damage. We witness severe deskilling of academic reading, of essay writing, of deep thinking, and even of scholarly discussions between students, all of which are now treated as acceptable to outsource to AI products (Guest, 2025). This is partially why Iris proposed critical AI literacy (CAIL) as an umbrella for all the prerequisite knowledge required for an expert-level critical perspective, such as the ability to tell nonsense hype apart from genuine claims grounded in theoretical computer science (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such realisations are possible only when one is educated in the principles behind AI that stem from the intersection of computer science and cognitive science; they cannot be learned if interference from the technology industry goes unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.

A case in point is our experience when a university centre introduced an 'AI feedback tool' for use in written assignments. Marcela informed the relevant university-level bodies and highlighted the severe deskilling potential for both teachers and students. The centre's response followed a typical pattern: asking for technopositive colleagues to be roped in as so-called relevant stakeholders. But who are the relevant stakeholders in a university if not teachers and students? These sorts of responses derail conversation about decisions that risk severe deskilling, and solidify our university's promotion of AI products that do not comply with data privacy regulations. In so doing, the university ignores ethical concerns raised by experts and experienced teachers.

Finally, in the summer of 2025, Olivia snapped when a series of troubling events culminated in the weaponisation of students against faculty. Documents like the Netherlands Code of Conduct for Research Integrity, related codes from around the world, the law in some cases, and our own personal ethical codes could, if followed and applied, proscribe undue interference in academic and pedagogical practice. However, cases such as these, where students act as mouthpieces of industry, accidentally or otherwise, and are supported by colleagues, are particularly worrisome not only for the harm they cause to the students themselves, but also for the harm they cause to the whole academic ecosystem. Such situations hollow out our institutions from within, creating bad actors out of our own students, PhD candidates, and colleagues.

Facing the Harms

The university is meant to be a safe space, free from external influence. Additionally, the right of academic freedom is in place to protect both faculty and students from industry’s creeping power, which often seeks to exercise undue influence over universities. However, with this right comes the responsibility to be upfront about conflicts of interest, and indeed about any entanglements with industry, just as we indicate affiliations and grant numbers on research outputs. It also comes with the ability, as academics, to reject any such conflicts, for example, to remove ourselves from compromising relationships to which we do not consent. Violations of these rights include: our colleagues deciding to implicate us in scientifically questionable conduct; or our ruling bodies deciding for us that our university will outsource its IT infrastructure to Microsoft, and not only that, but without any possibility of halting the AI nonsense mushrooming up in Outlook.

Embracing AI products normalises what were once serious transgressions, making them acceptable and even desirable: stealing ideas from others, erasing authors, and infringing authorship rights, all while harming our planet. In many ways it was inevitable that we would pen an Open Letter, and in fact we may even be too late for some students, who submit AI slop and cannot even format their references according to a style manual or credit authors for their ideas, yet are nonetheless in their final year. Any technological future, any academic pursuit of AI, will have to contend with these events. We, as critical scholars of AI and as social, behavioural, and cognitive scientists, will have to pick up the pieces left behind by the technology industry’s most recent attack on academia. We only hope there are indeed pieces left; and no matter what, we will continue to fight for space for students to hone their skills (to think, write, program, and research) unimpeded and independently.

The Open Letter: Action

All this has had a visceral effect on us as academics, which spilled out in the Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia, a process that also involved inviting on board other Netherlands-based colleagues who wished to lend their words to the text and sign their names below it. At present, anybody from anywhere who agrees can still sign it, but we first wanted to concentrate our efforts on the Netherlands, to really make a difference at our local and national levels. We also hope academics in other countries join in to pressure their respective organisations. A letter in a similar spirit also appears here: An open letter from educators who refuse the call to adopt GenAI in education.

In order to inform and build solidarity with allied colleagues, we have captured our counterarguments to the AI industry's rhetoric in a position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia. In it we analyse misuse of terminology, debunk tropes, dismantle false frames, and provide helpful pointers to relevant work.

References

Guest, O. (2025). What Does 'Human-Centred AI' Mean? arXiv preprint arXiv:2507.19960. DOI: https://doi.org/10.48550/arXiv.2507.19960

Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. DOI: https://doi.org/10.5281/zenodo.17065099

McCarthy, J. et al. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. URL: http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf 

Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability [Substack newsletter]. Sustainability Dispatch. DOI: https://doi.org/10.5281/zenodo.15677840 URL: https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic

van Rooij, I. (2022). Against automated plagiarism. DOI: https://doi.org/10.5281/zenodo.15866638

van Rooij, I. (2023a). Stop feeding the hype and start resisting. DOI: https://doi.org/10.5281/zenodo.16608308

van Rooij, I. (2023b). Critical lenses on 'AI'. URL: https://irisvanrooijcogsci.com/2023/01/29/critical-lenses-on-ai/

van Rooij, I. (2025). AI slop and the destruction of knowledge. DOI: https://doi.org/10.5281/zenodo.16905560 
