AI is Automating and Legitimating Hidden Curriculum in Schools
Civics of Tech Announcements
Next Tech Talk on May 6th: Join us for our monthly tech talk on Tuesday, May 6 from 8:00-9:00 PM EST (GMT-5). Join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.
Be sure to join us on Bluesky @civicsoftech.bsky.social and join/follow our Civics of Technology starter pack.
By Marie K. Heath
Recently Melissa Warr and I investigated the intersection of hidden curriculum in schools and the potential of hidden curriculum in generative AI, and I’m delighted to share an overview of the article, Uncovering the Hidden Curriculum in Generative AI: A Reflective Technology Audit for Teacher Educators, in today’s blog. Please also take a gander over to Melissa’s blog to read her summary of our article, and also check out some of her other provocative posts.
Melissa and I combined our approaches to AI research, applying the methodology of technology audit to consider the ways teacher educators and schools may inadvertently legitimate GenAI and its biases as unquestioned truths in education.
In the article, we imagined what it might be to adapt Apple’s (1975) argument on hidden curriculum to generative AI in schools. Apple contended that educators must:
. . . examine critically not just “how a student acquires more knowledge” (the dominant question in our efficiency minded field) but “why and how particular aspects of the collective culture are presented in school as objective, factual knowledge .” How concretely may official knowledge represent ideological configurations of the dominant interests in a society? How do schools legitimate these limited and partial standards of knowing as unquestioned truths? These questions must be asked of at least three areas of school life: 1) How the basic day-to-day regularities of schools contribute to students learning these ideologies; 2) how the specific forms of curricular knowledge reflect these configurations; and 3) how these ideologies are reflected in the fundamental perspectives educators themselves employ to order, guide, and give meaning to their own activity. (Apple, 1975, pp . 354-355)
We were struck by the timeliness, 50 years later, of his critique of “our efficiency minded field” as well as the ways that genAI intersects so neatly into his rebuke of collective culture that masquerades as “objective, factual knowledge.”
As we wrote in our paper,
Apple’s first two questions attend to the ideologies embedded in curriculum, which are passed off as objective and authoritative fact (How concretely may official knowledge represent ideological configurations of the dominant interests in a society? How do schools legitimate these limited and partial standards of knowing as unquestioned truths?). The notion of embedded curriculum is echoed in Benjamin’s (2019) concept of encoded bias. In other words, both schools and algorithms have encoded ideologies of power and oppression into themselves.
Apple’s final three questions of school life focus our attention on the places, spaces, and practices which allow for rapid and institution-wide implementation of the hidden curriculum. In this way, it is analogous to discriminatory design (Benjamin, 2019) and automated inequality (Eubanks, 2018) in that the day-to-day and at-scale implementation of the hidden curriculum, or encoded bias, is what makes it so particularly pernicious and institutionalized. Inquiring into, and putting the pressure of policy and practice on these places and points of uptake allow educators to disrupt the hidden curriculum.
Thus, while hidden curriculum is often used by education scholars to surface the unspoken ideology of schools, and discriminatory design is often used by critical technology scholars to uncover the encoded biases of algorithms, we see them as interrelated. In our study, we combine these approaches to develop technology audit questions. These questions are intended to uncover the entwined hidden curriculum and discriminatory design of GenAI used in education. There are many instances when schools integrate or rely upon AI in their day-to-day administration and pedagogy. For example, as education increasingly turns toward “personalized learning and instruction,” a method for teaching which applies individualized tutoring, teaching, and feedback through algorithms trained on massive quantities of an individual and collective data, hidden curriculum and discriminatory design meld into one.
What we found was that large language models (LLMs) are not explicitly racist or biased, but more insidiously, they are implicitly biased toward white students and against minoritized students. You can read more in our article about the authoritative tone that LLMs used when “talking” (I have to use square quotes here, since LLMs don’t actually talk, but rather, compute) to students it perceived as Black or Latinx. The approaches of LLMs toward minoritized students threatens disproportionate harm to those already vulnerable. It reinforces ideologies of authority, obedience, and hierarchical power structures.
Read the full paper to review specific examples from our research, and read what we propose for the field of teacher education with respect to teaching with and about genAI. Please reach out with questions and comments - we appreciate opportunities to think through this work.
References
Apple M. W. (1975). Ivan Illich and deschooling society: The politics of slogan systems. Social Forces and Schooling, 337–360.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new jim code. New York: John Wiley & Sons.
Eubanks V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.