Automation and AI in Education: Who is Reading the Room?

CoT Announcements

  1. Book Club this Thursday, 09/21/23: We are discussing Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil this Thursday, September 21, 2023 @ 8:00 p.m. EDT. Register for this book club event if you’d like to participate.

  2. Next Monthly Tech Talk on Tuesday, 10/03/23. Join our monthly Tech Talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, October 3rd, 2023, from 8-9pm EST / 7-8pm CST / 6-7pm MST / 5-6pm PST. Learn more on our Events page and register to participate.

  3. Critical Tech Study: If you self-identify as critical of technology, please consider participating in our study. We are seeking participants who hold critical views toward technology to share their stories by answering the following questions: To you, what does it mean to take a critical perspective toward technology? How have you come to take on that critical perspective? You can participate via OUR SURVEY. You are welcome to share with others. Thank you!

by Charles Falajiki

I have recently been following a discussion between two different communities: academics and activists. One issue that emerged was the need for these two communities to cooperate in order to address pressing problems, like climate change, that the world is currently facing. However, I feel the discussion still leaves a question unaddressed: who should set the collaborative agenda? Some scholars argue that activists are sometimes unreasonable in their reasoning and that their causes are sometimes utilised as political props. Some activists, on the other hand, posit that academia is concerned only with publishing papers and getting promoted, even if the findings and recommendations from that work are valuable only as citations in other studies and have no real-world influence. But this is not the focus of this discussion. Rather, as the debate over the mobilisation of Artificial Intelligence in Education (AIED) continues, it is vital to pay attention to who is setting the agenda for AIED: who are the actors endorsing it or forewarning of a looming danger, and what are their motivations and incentives?

As a social intrapreneur and digital education researcher, I have spent the last five years working on projects at the intersection of technology, education, and policy. During these years, I have observed several debates about the use of technology in education, and after working as a UNESCO Youth Researcher in 2021, I developed a particular interest in critical digital education research. That interest has led to my graduate field of study at the University of Oxford, where I will be exploring the role of technology in learning and education with a specific focus on inequality and social justice. With the growing assertions and fears about the use of emerging technologies such as AI in education, I think it is crucial for us to take a critical yet objective look at the mobilisation of these technologies in education. As a research community, we must build on the existing body of knowledge to create a path towards their responsible adoption.

From the cited potential of ChatGPT, OpenAI's suite of GPT (Generative Pre-trained Transformer) models, to enhance lessons, tutor students, grade assessments and papers, and provide additional support to students, to the modelling of a map of generative AI in education, the application of AIED is rapidly gaining momentum. For context, it took ChatGPT just two months after launching to reach 100 million users, a mark that TikTok, WhatsApp, and the World Wide Web did not reach for 9 months, 3.5 years, and 7 years respectively. However, with its recent boom, AIED is also experiencing a period of heightened scrutiny. Like the two sides of a coin, AI products are being taken up at scale throughout mainstream schooling and higher education on one side, while on the other we are witnessing growing pushback against the presence of AI technologies in education. My goal in this article is not to take a side in this debate, but rather to move the needle and position us to ask a more important question: who is reading the room when it comes to the use of automation and AI in education? Building on Neil Selwyn's characterisation of the AIED community (Fig. 1), this report argues that the "responsibilization" of the ethical design and deployment of AI, placing it on AI's developers or those who profit from it, as has been seen in some cases, is undemocratic and neglects the suffering of those who may be impacted by the (un)intended negative effects of these technologies. As Paul Nemitz argues, the development and promotion of AI technologies is already dominated by megacorporations (corporate proponents of AIED) and their dependent ecosystems, which profit immensely from its rise through jumps in stock market valuations and thereby wield economic power. It is this centralisation of power and dominance that Kate Crawford describes in her book, Atlas of AI:

Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labour, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimises for, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.

Consequently, my report argues that the discussion about what, where, and how AI should be used within educational contexts must make room for new voices, particularly those who will be most impacted by the (mis)use of AIED. I propose a new lens for the discussion about AIED, one that combines a broad range of actors, fields of study, defined ethical principles, and an equilibrium approach to understanding the ecological environment that shapes the technomoral principles underpinning the mobilisation of AIED. A common task for the AIED community would be to synergistically support the ecosystem in developing a deeper understanding of when, by whom, how, and for what reasons this new technology should and should not be utilised. This, perhaps, is the missing piece in the design and use of AIED. While I do not claim to have all the answers, I provide some useful analysis in this report, and my goal is for us, as a research community, to begin to have this conversation. That is why I encourage you to read the full article by following the link below.

Previous

Iggy Peck, Architect Is an AI Doomer and Other Things I Struggle to Talk with My Kids About

Next

Here are “101 Creative Uses of AI in Education.” Are They Truly Creative?