From Access to Equity: Open Education in the Age of AI

Lots Happening at Civics of Tech!

CONFERENCE!

Our fourth virtual conference schedule is out! Head over to our conference page to preview the schedule and register for the conference.

TECH TALK!

Next Tech Talk on August 5th: Join us for our monthly tech talk on Tuesday, August 5th from 8:00-9:00 PM Eastern Time (GMT-4). Join an informal conversation about events, issues, articles, problems, and whatever else is on your mind. This will be a great place to share your highlights from the conference! Register here or visit our events page.

BOOK CLUBS!

July Book Club: We’re reading The AI Con, by Emily Bender and Alex Hanna. Join us for a conversation on Tuesday, July 22 at 8:00 PM Eastern Time, led by Charles Logan. Register on our events page here!

Conference Book Club: We’re also reading Empire of AI, by Karen Hao, and we’ll be holding our book club conversation at the end of our first day of our conference: Thursday, July 31 at 3:00 PM Eastern Time. See more on our events page.

By Autumm Caines

The following is a kind of transcript/reflection of a keynote that I was honored to deliver on May 29th at the SABER East Conference at the Rochester Institute of Technology about Open Education and Artificial Intelligence. The audience was largely higher ed faculty and staff.

As I continue to read, think, and have conversations around these matters, things shift and change, in my mind but also in the world around me. So, this is not the exact text of the keynote but a kind of transcript/reflection merging what I knew then and what I know now, just a few months later. 

Open Education

So, what do we even mean by "Open Education"? Anyone can use the term "open" after all. Even ChatGPT's parent company is "Open" AI. It is my understanding that the company’s current use of that term in their name has, at best, come to hinge on "no-cost access" to their ChatGPT product. But no-cost access is a problematic way to define Open - especially as it applies to education. 

So, again, what do we even mean by "Open Education"? 

Open Education always comes back to ideas around facilitating education for all. Cronin and MacLaren do a good job of showing that the term has been in use since the 1960s and '70s, that the consensus seems to be that it has no single definition, and that several other terms operate in its orbit.

Similar to how OpenAI claims to be Open because they do not charge for ChatGPT, Open Educational Resources or OER are often thought of as just being free textbooks, but this is shortsighted. OER is defined not just by no-cost access but by the ability to legally share that content and to adapt it to meet your needs. This is made possible through open licenses like Creative Commons, which allow authors/creators to give legal permissions to adapt and share.

The term OER comes out of UNESCO's 2002 Forum on Open Courseware in Higher Education for Developing Countries, which was held in Paris with support from the Hewlett Foundation and WCET. This came after MIT's big push to create OpenCourseWare, and MIT was a large presence at the forum.

UNESCO of course had skin in this game because of the UN's Article 26 of the Universal Declaration of Human Rights, which states that everyone has a right to an education. 

So, this is not just the freedom to remix educational content but education in the pursuit of the agency and liberty of all learners. And when we take this on as a thing to work towards it makes sense to look at those for whom education is often not an option or an expectation.  

Other terms in the Open Education space include Open Educational Practices (OEP) and Open Pedagogy (OP). In my keynote, I unpacked these with some depth, but I am not going to here as I have written on this elsewhere. But I did want to point out that examples of open education often describe an approach to teaching beyond just conveying content for memorization and regurgitation. Often those who use these methods strive to inspire students to claim and embody their education and contribute to the world beyond the classroom in a way that is meaningful to them.

Throwing Open Around

It was that part about students contributing to the world beyond the classroom where technology ended up being particularly useful but also quite problematic. 

Many of the examples of Open Pedagogy and Open Educational Practices have involved technology: for example, co-editing Wikipedia with students, blogging, or using social media platforms for academic ends. The no-cost Web 2.0 technologies that were all the rage around 2010, especially social media, played a specific role in enabling a teacher to collaborate with students and allowing those students to communicate with a broader audience (as long as everyone had the supporting technology, of course). In addition, many scholars created online presences to disseminate their research or just to think out loud with others about their teaching or other academic interests.

These tools enabled a scale of networking we had not seen before, and because they cost nothing they were easy to access and use. The term "open" resonated, though it was not being used with much depth.

In this use of the term, nothing required ideas of freedom and learner agency to be part of the tool use. Though the instructors may have had the best of intentions, it turned out many of the companies providing the tools were harvesting data for purposes that were more about profits than about learner freedoms. Using these tools may have felt like openness, with easy access and abundant connections, but they were never really free. No one wanted to talk about the valuable data handed over to tech companies, who would go on to sell it to data brokers or use it themselves to target advertising and create filter bubbles.

In 2017 Chris Gilliard wrote in the EDUCAUSE Review:

"Web2.0—the web of platforms, personalization, clickbait, and filter bubbles—is the only web most students know. That web exists by extracting individuals' data through persistent surveillance, data mining, tracking, and browser fingerprinting and then seeking new and "innovative" ways to monetize that data."

And so now here we are in 2025 with a new "revolutionary technology" disrupting education.

Higher Ed Impacts of Artificial Intelligence

Clay Shirky is Vice Provost for AI and Technology in Education at New York University, and he recently wrote a piece in the Chronicle that I can't seem to shake, titled "Is AI Enhancing Education or Replacing It?"

He talks about how, over the last few years, he has seen a range of emotions from faculty about AI, from excitement to anger, but recently he is seeing something he hasn't seen before: sadness. He writes:

"This came from faculty who were, by their account, adopting the strategies my colleagues and I have recommended: ...emphasizing the connection between effort and learning, responding to AI-generated work by offering a second chance rather than simply grading down, and so on. ...Those faculty were telling us our recommended strategies were not working as well as we’d hoped, and they were saying it with real distress."

He also quotes students saying some things that are really worrying: that they have become lazy, and that they feel AI is causing them to lose their ability to think critically and be creative.

He ends this article with a true story about a Tennessee high school student who sued the local school system because he graduated with a 3.4 GPA but was unable to read or even spell his own name. 

Higher education is currently all over the map on how to deal with the impacts of this technology, with responses ranging from embracing it to surveilling students and harshly punishing its use.

Open Education and AI

Where are we in thinking about Open Education and AI? As with responses from other areas, they are mixed. There are those trying to use AI in the pursuit of Openness, study it in the name of Openness, or mitigate its harms with Open protocols, but there are also those making a strong case for refusal.

Last year MIT issued a global call for papers looking for articles at the intersection of Open Education and AI. This past January they published nine of them, and I want to highlight a few.

One of the problems with OER is that it is often written by those in the Global North. So while there are these great intentions about educating all, the reality is that the materials are often written in English with metaphors and images that mostly resonate with a Western audience. Aigerim Shilibekova shares this project from Kazakhstan which used LLMs and image generators to adapt existing OER materials around Universal Design for Learning (UDL), originally created by CAST in the U.S., to localize the content for Kazakh educators. AI tools sped up the process of creation, but decisions about language, culture, and pedagogy remained in the hands of local educators. 

And how is not just AI itself but AI literacy impacting those who work in Open Education? Angela Gunder, Joshua Herron, Nicole Weber, Colette Chelf, and Sherry Birdwell give us this study with interviews and surveys of 34 educators who have experience working in Open Education across five continents. It reports on how they use AI and what they think of it. Participants emphasized AI literacies rooted in ethics, pedagogy, and cultural context. While many saw opportunities in AI’s multilingual support and collaborative potential, they also raised critical risks: the erosion of attribution, misuse of openly licensed content for training proprietary models, environmental harms, and the potential for AI to replicate and amplify bias.

The environmental impacts of AI are very real, and measuring them is anything but clear cut (though this in-depth investigation seems to be a good start). Some will debate what exactly to measure and how to measure it, but I don't think anyone can disagree that AI uses more energy than simpler technologies, so looking at waste is an important step. Royce Kimmons, George Veletsianos, and Torrey Trust authored this paper, which advocates for a judicious use of AI and starts with a simple but overlooked problem: redundancy. If 1,000 students each use generative AI to define "osmosis," we've burned 1,000 times the energy for everyone to do essentially the same thing, and most of those outputs are never seen again. The authors argue for judicious AI use: use AI once to generate learning supports like summaries, glossaries, translations, and accessibility features, and then store and serve those as part of an OER.

Refusal 

I don’t want to paint Open Education as some kind of perfect frame that we should all try to emulate all of the time. Besides perpetuating the views of the Global North (which Shilibekova is trying to address), there are real labor issues: adjunct and precarious knowledge workers are often the ones asked to create openly licensed materials for free or for one-time payments. And those materials are often introductory materials for introductory classes, which tend to be large classes taught, again, by part-time faculty.

There is value in the idea of education for all and pursuits in that direction but we need to tread carefully in doing so. The “no-cost” version of Openness is at the heart of many of the problems with the concept and it has not gone anywhere. We are primed to make the same mistakes with AI that we made with Web 2.0 if we are not careful. 

Helen Beetham shares a view of Open Education and AI that goes back to Article 26 of the Universal Declaration of Human Rights. It is a critical look at how AI is undermining the right to an education. Beetham looks not only at potential harms but at actual harms, including AI being used to collect data on and make unfair decisions about students, AI being trained on cultural artifacts that are biased, pedagogy being replaced with AI agents, and outcomes being reinforced around the automation of intellectual work.

I find Beetham's work here to be of particular significance because she not only calls out the harms but imagines how education leaders might respond. It’s worth reading the whole post, where she introduces a framework of which refusal is a big part. She frames refusal as a kind of skill that needs to be nurtured, writing:

“Far from being signs of ignorance, maintaining spaces of non-mediated dialogue and cultural expression are rapidly becoming signs of technical and epistemic skill. Educators can foster these skills, confident that their learners will not be ‘missing out’ on AI because AI will always be making itself more useable: indeed, it is already compulsively useable, and it is not use but non-use that needs to be actively developed."

Sometimes when I get a chance to talk about refusal someone will ask about how they can refuse something that is part of their job, or if they will get in trouble if they practice refusal. It makes me sad that refusal often gets a bad rap and that it seems like this scary thing. We need those who use technology in skillful and meaningful ways - saying no sometimes has to be part of that. We all have different parts to play and some will be able to refuse things that others may have to quietly abide till they find alternatives, but I do think figuring out where the lines are for you only happens when you engage with these ideas in community. 

So, this blog post has perfect timing to end the same way that I ended the keynote. On July 31st and August 1st is the Civics of Technology Conference, with the theme of Communal Resistance to Artificial Systems, keynote talks from Audrey Watters and Chris Gilliard, and a ton of breakout sessions. The conference is free, so if you are looking to engage with critical perspectives around AI and other technologies in education, this is the place to be.
