Collectively Asking Technoskeptical Questions About ChatGPT

CoT Announcements

  1. Next Book Club: We are reading Sasha Costanza-Chock’s 2020 book, Design Justice: Community-Led Practices to Build the Worlds We Need. We will meet to discuss the book at 8pm EDT on Thursday, May 18th, 2023. Register on our Events page.

  2. Next “Talking Tech” Monthly Meeting on 05/02: We hold a “Talking Tech” event on the first Tuesday of every month from 8-9pm EDT/7-8pm CDT/6-7pm MDT/5-6pm PDT. These Zoom meetings include discussions of current events, new books or articles, and more. Participants can bring topics, articles, or ideas to discuss. Join our next session on Tuesday, May 2nd. Register on our Events page.

by Marie K. Heath, Daniel G. Krutka, and the following members of the TACTL SIG who contributed to answering the technoskeptical questions: Sumreen Asim, Yin Hong, Fredrik Moerk Roekenes, Lauren Bagdy, Rick Voithofer, Autumn Caines, Jeff Carpenter, Jennifer Darling-Aduana, Joana Hall, Debra Bernstein, Elvin Fortuna, Ann Devitt, Kara Dawson, Stacy Gilpin, Joan Hughes, Natalie B. Milman, Heather Pearson, Kristen Swenson, and Lauren Weisberg

Earlier in April we wrote a blog post arguing for increased theorization of technology in educational technology scholarship. In it, we reviewed the five technoskeptical questions we often use to consider the implications of technologies for our individual and collective lives through ecological and critical lenses. We used ideas from the post to help facilitate an interactive conversation on ChatGPT with the Technology as an Agent of Change (TACTL) Special Interest Group (SIG) of the American Educational Research Association (AERA). Last Sunday, we used this technoskeptical approach to inquire into the unintended, collateral, and disproportionate effects of ChatGPT in education and society. Today, we’re sharing the collective work and thinking that resulted from the meeting.

The SIG members sorted themselves into groups according to which question they preferred to consider. Then they discussed and offered possible answers to each question. With their permission, we’re sharing their responses.

1. What does society give up for the benefits of ChatGPT?

Folks answered this first by rejecting the tech-centric assumption that efficiency should be our highest value. They noted the ways that (good) struggle, or cognitive disequilibrium, is integral to learning. Searching for a word, or the next word, in a piece of writing helps clarify thought and understanding. It allows us to wrestle with new learning in what Vygotsky termed the zone of proximal development. Dan posited that this allows us to engage in what Albert Borgmann refers to as “good burdens,” finding satisfaction in engaging with struggle. Marie wondered if the infinite possibilities that exist in the space between one word and the next are part of what makes human creation both beautiful and fundamentally different from the mathematical average of machine-produced writing.

2. Who is harmed and who benefits from ChatGPT?

This group argued that the companies that create large language models will be the most obvious beneficiaries of society’s adoption of these models. They also noted that assessments in education might benefit from increased authenticity, encouraging teachers to ask questions that can’t be successfully answered by an algorithm. Participants also identified harms, particularly to privacy and data. Because ChatGPT is trained on text scraped from the internet, will it simply reproduce the biases—racism, sexism, transphobia—that pervade society? They also considered the ecological impact of the electrical power, storage facilities, and cooling capabilities needed to run ChatGPT.

3. What does ChatGPT need?

The SIG participants provided detailed answers about the physical, social, and political architecture that ChatGPT needs to thrive. We won’t try to summarize all of their answers, but in brief: ChatGPT needs infrastructure that includes the internet, electrical and processing power (for data centers), environmental resources, human protection of that infrastructure (physical and cybersecurity), and labor from programmers, the data workers who clean its training sets, content moderators, support staff, and salespeople. It also needs data for constant updating: to learn language and structure, to find patterns in human behavior that is tracked and shaped by technology, to draw on cultural digital artifacts (multilingual, multicultural), and maybe even to acquire an understanding of human morality. In short, ChatGPT needs humanity. Oh, and our money and permission. It needs a business model that profits from us, and a lack of legislation to stop it.

4. What are the unintended or unexpected changes caused by ChatGPT?

This group drafted their thoughts as a paragraph; we are pasting portions of their answers below:

We probably won’t ever know all of the unintended consequences and may not fully realize the consequences given how it is and will continue to infiltrate and present itself in our lives/technologies. In some cases it will be positive and others harmful/negative…. It may increase or reduce workload. There will be a proliferation of mediocre, bad, and harmful apps… It is reimagining collaboration, academic integrity, and more… The output is biased and lacks diverse perspectives because its data are predominantly biased, privileged, Western, white majority cultures, and patriarchal. An unintended consequence is that it IS being used for learning even though it may not have been “designed for learning”. An unexpected change is seasoned by all of this–like the tool of the oppressor. We may think more critically in the future—these skills of criticality are even more important. We may trust the technology when we should not.

Of course, anticipating unintended and unexpected changes is challenging for such a new technology, but this group surfaced possibilities such as affecting the notion of work, reinforcing dominant perspectives at the expense of minoritized groups, or even increasing faith in technology over ourselves.

5. Why is it difficult to imagine our world without machine learning?

We edited this last question to include all of machine learning, since ChatGPT is a fairly new iteration of large language modeling. Interestingly, no one elected to engage with this question. As a whole group, we speculated on the ways we depend on algorithmic recommendations for music and other media. We wondered about the intersection of our identities with algorithms and the ways this may (or may not) have prompted us to make certain purchases or pursue particular promotions.

Overall, our colleagues in TACTL offered thoughtful inquiries and insights for us to consider as educators and scholars as we move forward in a world with ChatGPT. We are committed to acting with intention and toward justice as we engage with machine learning across education, and this collective thinking helped us to move toward that goal. We hope this blog post is as thought-provoking for you as the session was for us.
