The Kapor Center’s Responsible AI and Tech Justice: A Guide for K-12 Education

by Marie K. Heath

Announcements

  1. Next Monthly Tech Talk on Tuesday, 02/06/24. Join our monthly tech talks to discuss current events, articles, books, podcasts, or whatever we choose related to technology and education. There is no agenda or schedule. Our next Tech Talk will be on Tuesday, February 6th, 2024 at 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. Learn more on our Events page and register to participate.

  2. Spring Book Clubs Announcement!: We will hold three book clubs in spring 2024, including two of the books that most influenced our Civics of Tech project, with a new book sandwiched between them. We often talk about how Neil Postman’s work influenced our ecological perspective and Ruha Benjamin’s work has influenced our critical perspective. Yet, we’ve never held book clubs to discuss either. We’re excited to return to those two classics and also dive into Joy Buolamwini’s highly anticipated new book. You can find all our book clubs on our Events page.

    1. Register to join us on February 15th as we discuss Neil Postman’s classic, Technopoly: The Surrender of Culture to Technology.

    2. Register to join us on March 21st as we discuss Joy Buolamwini’s new book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines.

    3. Register to join us on April 25th as we discuss Ruha Benjamin’s instant classic, Race After Technology: Abolitionist Tools for the New Jim Code.

Dan and I are excited to share the Kapor Center’s Responsible AI and Tech Justice: A Guide for K-12 Education (you can download the full guide or an executive summary), which is part of the Kapor Center’s larger initiative on justice-centered computing. We were excited and honored that Dr. Shana White invited us into this work to play a role in shaping the guide, along with several other folks who have participated in our Civics of Technology community, including Drs. Aman Yadav, Sepehr Vakil, Jean Ryoo, and Jane Margolis, as well as with scholars who significantly shape our scholarship and practice at CoT, including Drs. Safiya Noble, Emily Bender, and Chris Gilliard.

The guide opens with an overview of the possibilities and current perils of AI, and then makes the case for the need for education across ages and disciplines to engage in the work of tech justice, arguing:

At a time when there is so much promise juxtaposed with growing fear of the powerful breakthroughs in AI and its increasing impact on our everyday lives, the comprehensive understanding and critical interrogation of AI technologies must no longer be limited to a select few. We must seize upon this moment to improve equity and justice in AI by preparing a diverse and robust AI workforce; providing reskilling opportunities to effectively incorporate AI into existing jobs and roles; conducting algorithmic audits and identifying biases in algorithms; investing in solutions aiming to utilize AI for positive social impact; and implementing regulation and accountability measures to mitigate risk and harm. To achieve this, we must begin by making fundamental changes in K-12 education to ensure all students and teachers are equipped with the knowledge, skills, and resources to become critical consumers and ethical producers of the next generation of technologies. (p. 4).

In particular, the guide argues that computing education needs to center critical interrogation of AI’s impact. The authors also urge that this work be taken up across educational spaces.

…the critical examination of the technologies that are ubiquitous in students’ and educators’ lives, impact social and democratic structures, and have disproportionate impacts on marginalized communities must not be a peripheral part of computing education. Instead, we argue that the critical interrogation of AI’s development and impact must be a core component of K-12 computing education--and of education more broadly--and we must intentionally center racial and social justice in the examination of these technologies. (p. 5).

The Kapor Center’s dedication to racial equity and justice anchors the guide, which explicitly calls for a racial and social justice lens to be employed when engaging in the work of ethical AI and justice-centered technology education. One of the many reasons Dan and I were honored to be part of this work is the Kapor Center’s unambiguous commitment to naming injustice and working toward equity.

The guide is organized around six core components, pictured below.

[Image: The Six Core Components of the AI Guide. 1. Examine the AI technology creation ecosystem. 2. Interrogate the complex relationship between technology and human beings. 3. Explore the impacts and implications of AI technologies on society. 4. Interrogate …]

Each component takes an inquiry approach to its topic, listing questions for teachers, students, and policymakers to reflect on as they wrestle with issues around AI and just computing in schools and society. For instance, the questions that accompany Component 1: Examine the AI Technology Creation Ecosystem include:

  • Who is involved in the ideation stage, research, and design phase of AI technology creation?

  • Why did the individual/group produce this piece of technology or AI tool?

  • What demographic trends exist among AI technology company boards, leadership, and technical workforce?

  • What are the backgrounds, cultures, and values of AI company boards and leadership teams?

  • How do the identities/backgrounds of technology producers impact algorithmic thinking?

  • Who invests in AI technology tools and who benefits (financially) from their creation?

  • What is Big Tech (Google, Microsoft, Amazon, etc.), who are the key players in AI (OpenAI), and what power does Big Tech hold in the past, present, and future of AI technologies?

  • What are techno-optimists, what do they believe, and what do their critics believe?

  • How are AI products currently regulated?

  • Why is there so little regulation and accountability of AI technologies?

  • How are algorithms used across social media platforms?

  • How are AI technologies used for surveillance, policing, and international conflicts?

Finally, the guide concludes with an appendix of resources and lesson plans, some of which you may recognize from our Civics of Technology site! We hope that you will review the full guide or the executive summary, and please share your thoughts here in the comments!
