ChatGPT and Good Intentions in Higher Ed

Civics of Tech announcements:

  1. First “Talking Tech” Monthly Meeting this Tuesday!: We are launching a new monthly event called “Talking Tech” on the first Tuesday of every month from 8-9pm EST/7-8pm CST/6-7pm MST/5-6pm PST. These Zoom meetings will include discussions of current events, new books or articles, and more. Participants can bring topics, articles, or ideas to discuss. Our first meeting is this Tuesday, February 7th. Register on our Events page or click here.

  2. Book Club Reminder: Our next book club will discuss the 2020 book Data Feminism by Catherine D'Ignazio and Lauren F. Klein on Thursday, March 16th, 2023 from 8-9:30pm EST/7-8:30pm CST/6-7:30pm MST/5-6:30pm PST. The book is available in multiple formats, including audiobook, and a paperback version was just released! Register on our Events page or click here.

This blog was originally posted on December 29th, 2022 on Autumm Caines’ blog: Is a Liminal Space. It is cross-posted here with permission. Visit Autumm’s site for more thoughtful posts.

Image by Yvonne Huijbens from Pixabay

by Autumm Caines

I’m frustrated by the conversation around ChatGPT in higher education.

So far, the conversation has been largely about using the tool as a text generator and fears around how students can use it for “cheating”. I tend to think this is only the tip of the iceberg, and that frustrates me – though this convo is still very young, so maybe I just need to give it a chance to develop. I think the more interesting (and likely more disruptive) conversation is around how the tool can be used for meaning making (and the legal issues around intellectual property). Maybe I’m overreacting; then again, maybe I’m not.

But meaning making is not the topic of the day! No, the topic of the day is “cheating” and everyone is officially freaking out!

Just in the last few days, there have been claims of “abject terror” from a professor who was able to “catch” a student “cheating” with ChatGPT (resulting in the student failing the entire course), calls to return to handwritten, in-person essay writing, and over 400 comments (at the time of this writing on Dec 29th) on an article about the tool’s impacts in higher ed, almost entirely focused on fears around “cheating”.

Besides the calls for surveillance and policing, the humanized approaches being proposed include talking with students about ChatGPT and updating your syllabus and assignment ideas to include it. But often these ideas involve getting students to use the tool: helping them see where it can be useful and where it falls down. This is a go-to approach for the humanistic pedagogue, and, don’t get me wrong, I think it is head and shoulders above the cop shit approach. Yet there are some parts of this that I struggle with.

I am skeptical of the tech-inevitability standpoint that ChatGPT is here and we just have to live with it. All-out rejection of this tech is appealing to me, as the tech seems tied to dark ideologies and does seem different, perhaps more dangerous, than what has come before. I’m just not sure how to go about that all-out rejection. I don’t think trying to hide ChatGPT from students is going to get us very far, and I’ve already expressed my distaste for cop shit. In terms of practice, the rocks and the hard places are piling up on me.

Anyway, two issues with the good intentions around working with ChatGPT and students are giving me pause:

It is a data grab

Many (though not all) of the assignment ideas I’ve heard or seen require students to use ChatGPT, which requires an OpenAI account. An OpenAI account requires identifiable information, like an email address or Google account, which means that use of the tool can be tracked. OpenAI’s privacy policy is pretty clear that they can use this information how they want, including sharing it with third parties, and that data may even be visible to “other users” in a way that seems particularly broad.

I have this same issue with any technology that does not have a legal agreement with the university (and I don’t necessarily even trust those that do). But I’ve also long believed that the university is fighting a futile battle if we really think we can stop students or professors from using things that are outside of university contracts.

Some mitigation ideas for the data grab

Note: All of my mitigation ideas are, I’m sure, flawed. I’m just throwing out ideas, so feel free to call out my errors and to contribute your own ideas in your own blog post or in the comments below.

Don’t ask students to sign up for their own accounts, and definitely don’t force them to. There is always the option of the professor using their own account to demo things for students, and other creative design approaches could be used to expose students to the tool without having them sign up for accounts.

If students want their own accounts, maybe specifically warn them about some of these issues and encourage them to use a burner email address – but only if they choose to sign up.

I’m not sure if having one account with a shared password for the whole class to use breaks a user policy somewhere (terms of service often prohibit sharing account credentials). It could get the account taken down, but I wonder how far you could take this.

It is uncompensated student and faculty labor, potentially working toward job loss

How do humans learn? Well, that is a complex question we don’t actually have agreement on, but if you will allow me to simplify one aspect of it: we make mistakes, realize those mistakes (often in collaboration with other humans, some of whom are nice about it and others not so much), and then (this part is key) we correct those mistakes. Machine learning is not that different from this kind of human learning, but it gets more opportunities to get things wrong and it can go through that iterative process faster. Oh, and it doesn’t care about niceness.

Note: I cannot even try to position myself as some kind of expert on large language models, AI, or machine learning. I’m just someone who has worked in human learning for over 15 years and who has some idea about how computational stuff works. I’ve also watched a few cartoons and I’ve chatted with ChatGPT about machine learning terms and concepts.*
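With that caveat firmly in place, here is a toy sketch of the guess–mistake–correct loop I’m describing. It is purely illustrative Python, nothing like ChatGPT’s actual scale or training process:

```python
# Toy illustration (nothing like how ChatGPT is actually built): a program
# that "learns" a hidden number the way described above – guess, realize
# the mistake, correct it, repeat. Real machine learning runs a conceptually
# similar loop over millions of parameters, much faster.

target = 42.0      # the "right answer" the learner doesn't know yet
guess = 0.0        # the initial (wrong) attempt
learning_rate = 0.1

for step in range(100):
    error = guess - target          # realize the mistake
    guess -= learning_rate * error  # correct it, a little at a time
    # and no feelings get hurt: the machine doesn't care about niceness

print(round(guess, 2))  # ends up very close to 42.0
```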

But even with all of its iterations, it seems to me that human feedback is key to its training, and that the kinds of assignments we would ask students to take part in using ChatGPT are exactly the kind of human fine-tuning that it (and other tools like it) really needs right now to become more factually accurate and to really polish that voice. Machines can go far just on the failing/succeeding loops they perform themselves, but that human interaction [chef’s kiss]. And that should be worth something.
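To show why that interaction is worth something, here is a hypothetical sketch of how human judgments might be captured as training data. The function and field names are my own invention for illustration; this is not OpenAI’s API or actual pipeline:

```python
# Hypothetical sketch: every time a human rates or corrects a model's
# output, that judgment becomes a labeled example the model could later be
# fine-tuned on. All names below are invented for illustration only.

feedback_log = []

def record_feedback(prompt, response, rating, correction=""):
    """Store one human judgment of a model output (rating on a 1-5 scale)."""
    feedback_log.append({
        "prompt": prompt,
        "response": response,
        "rating": rating,         # the scarce, valuable part: human judgment
        "correction": correction, # even more valuable: a human-written fix
    })

# A classroom assignment asking "where does ChatGPT fall down?" produces
# exactly this kind of record – unpaid labor, if it feeds back into training:
record_feedback(
    prompt="Explain photosynthesis to a 10-year-old.",
    response="(model output here)",
    rating=2,
    correction="(student's improved explanation here)",
)
```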

When I imagine what a finely tuned version of ChatGPT might look like, I can’t say it feels very comfortable, and I can’t imagine how it does not mean job/income loss in some way or another. It could also mean job creation, but none of us really has any idea.

What we do know is that ChatGPT’s underlying tech is GPT-3.5, a refinement of GPT-3, and OpenAI plans to drop an upgraded version, GPT-4, in 2023. Asking students to train the thing that might take away opportunities from them down the road seems particularly cannibalistic, but I also don’t know how you fight something you don’t understand.

Some ideas for mitigating the labor problem 

I’m pretty stuck on this one. My go-to solution for labor problems is compensation, but I don’t know how that works here. I’m thinking that we are all getting ripped off every time we use ChatGPT. Even if it ends up making our lives better, OpenAI is now a for-profit (albeit “capped-profit”) company, and they stand to make a lot here (barring legal issues). But I don’t think OpenAI is going to start paying us any time soon. I suppose college credit is a kind of compensation, but that feels hollow. I do think that students should be aware of the possible labor issues, and no one should be forced to use ChatGPT to pass a course.

I just want to end by saying that we need some guidance, some consensus, some … thing here. I’m not convinced that all uses of ChatGPT are “cheating”, and I’m not sure someone should fail an entire course for using it. I mean, sure, if you pop in a prompt, get a three-second response, and copy and paste it, I can’t call that learning, and maybe you should fail that assignment. But if you use it as a high-end thesaurus, or you know your subject and use ChatGPT to bounce ideas off of, and you are able to call out when it is clearly wrong… Personally, I’d even go so far as getting a first draft from it, as long as you expand on and cite the parts that come from the tool. I’m not sure these uses are the same thing as “cheating”, and if they are, I’ve likely “cheated” in writing this post. I’ve attempted a citation below.

~~~

** Update 1/26/23: After publishing this post, some readers were looking for more mitigation ideas. In response, I published Prior to (or instead of) using ChatGPT with Your Students, a list of classroom assignments focusing on privacy, labor, history, and more around ChatGPT and AI more broadly.

*Some ChatGPT was used in the authoring of this blog post, though very little of the text was generated by ChatGPT. I chatted with it a bit as part of a larger process of questioning my own understanding of machine learning concepts, but this also included reading/watching the hyperlinked sources. My interactions with it included questions and responses around “human-in-the-loop” and “zero-shot learning”, but I didn’t use these terms in this post because I worried that they may not be accessible to my audience. I do think that I have a better understanding of the concepts because of chatting with ChatGPT – especially with the “now explain that to me like I was a 10-year-old” prompt. One bit of text generation: I asked it to help me find other words/phrases for “spitballing”, and I went with “throwing out ideas”.
