What are the unintended or unexpected changes caused by Generative AI?

CoT Announcements

  1. Next “Talking Tech” Monthly Meeting is on Tuesday, 06/06: We hold a “Talking Tech” event on the first Tuesday of every month from 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. These Zoom meetings include discussions of current events, new books or articles, and more. Participants can bring topics, articles, or ideas to discuss. Register on our Events page or click here. We previously indicated the June meeting was on 06/05. We apologize for that mistake.

  2. Next Book Club on 07/27/23: We are reading Meredith Broussard’s 2023 book, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. We will meet to discuss the book at 8pm EDT on Thursday, July 27th, 2023. You can register to join on our Events page.

  3. 2023 Annual Conference: The 2nd annual Civics of Technology conference will be held online from 10am-3pm EST on both August 3rd and 4th, 2023! You can learn more, register for the conference, or submit a proposal on the 2023 conference page.

by Jacob Pleasants

Addressing this critical question about technology (#4 for those keeping track) requires us to stretch our thinking and our imaginations. By their very nature, unintended and unexpected changes are hard to detect and even harder to predict. This is all the more true for technologies such as Generative AI that have a very broad set of potential uses. How do we even begin to think about the wide-ranging consequences - and especially harms - that Generative AI might bring?

This is, of course, a much-discussed question, as evidenced by some recent CoT blog posts. Today, though, I want to point out a recent and valuable entry into that conversation: the report written by the Electronic Privacy Information Center (EPIC) entitled “Generating Harms: Generative AI’s Impact & Paths Forward” (which can be found here). Unlike discourse that foregrounds existential risks of future AI development, the report details issues and harms that are far more proximal (that either currently exist or can easily arise in the short term). It considers a wide range of harms, spanning the use of AI for extortion, issues around user privacy, environmental costs of the technological infrastructure, and more. The report is not comprehensive, but provides a digestible and accessible introduction to these complex concerns.

I encourage folks to check it out for themselves. Rather than recount the specific issues it covers, I want to highlight a particular conceptual choice that the authors made. In their notes on how the report was written, the authors state that it “draws on two taxonomies of A.I. harms to guide our analysis:”

  1. Danielle Citron’s and Daniel Solove’s Typology of Privacy Harms, comprising physical, economic, reputational, psychological, autonomy, discrimination, and relationship harms; and

  2. Joy Buolamwini’s Taxonomy of Algorithmic Harms, comprising loss of opportunity, economic loss, and social stigmatization, including loss of liberty, increased surveillance, stereotype reinforcement, and other dignitary harms.

The report illustrates how taxonomies/typologies such as these can greatly facilitate inquiry into complex technologies. A good typology can direct attention toward dimensions and possibilities that might have otherwise been overlooked, which is exactly what is needed when examining the unanticipated and unintended effects of a technology. In the case of generative AI, the EPIC report examines multiple domains that are often under-examined: harms related to relationships and autonomy as well as losses of liberty and dignity. 

The report can help us engage in richer conversations about generative AI, which is of course valuable. But more broadly, the approach that the report takes could likely be fruitful for a wide range of technologies. Although originally created to enumerate privacy harms (in the case of Citron and Solove) and algorithmic harms (in the case of Buolamwini), these frameworks could very well help us think through many technological systems.

I am now wondering how these typologies might be used in educational settings to assist learners in engaging with this critical question. How might typologies help students identify unanticipated and unintended consequences that they might have otherwise missed?
