Muppets and Machines: Seeing the Puppeteers Behind AI
Lots Happening at Civics of Tech!
CONFERENCE!
Our fourth virtual conference will be on July 31 and August 1! Head over to our conference page to learn about our amazing keynote speakers, register for the conference, and propose a session.
TECH TALK!
Next Tech Talk on June 3rd: Join us for our monthly tech talk on Tuesday, June 3, from 8:00-9:00 PM Eastern Time for an informal conversation about events, issues, articles, problems, and whatever else is on your mind. Use this link to register.
BOOK CLUBS!
Next Book Club: We’re reading Mind Amplifier, by Howard Rheingold. Join us for a conversation on Tuesday, June 17 at 8:00 PM Eastern Time, led by Tiffany Petricini. Register on our events page here!
July Book Club: We’re also reading The AI Con, by Emily Bender and Alex Hanna. Join us for a conversation on Tuesday, July 22 at 8:00 PM Eastern Time, led by Charles Logan. Register on our events page here!
Bonus Book Club: We’re also reading Empire of AI, by Karen Hao, and we’ll be holding our book club conversation at the end of our first day of our conference: Thursday, July 31 at 3:00 PM Eastern Time. See more on our events page.
+NEW REVIEW: Of Adam Becker’s More Everything Forever
By Katie Blocksidge and Hanna Primeau
Have you seen how humans interact with Muppets? If you have not been blessed to experience how Michael Caine interacted with Kermit in The Muppet Christmas Carol or how Tim Curry and Miss Piggy bantered in Muppet Treasure Island, we recommend them as some light watching in these trying times. Back in April, we had a conversation about the Muppets over Instagram; we were discussing an entertaining post about how people who had the luck to interview one of the Muppets frequently made the mistake of directing all their questions to the Muppet rather than to the performer who was actually answering them.
This anecdote about the Muppets may be apocryphal, but there are other instances of people responding to the Muppets as if they are real. They exist as part of the world, and we respond to them in kind. Hell, we experience emotional responses to art, music, theater, movies, and books all the time; it is one of the reasons we seek them out.
Sure, you’re probably thinking, but what does this have to do with technology?
In February 2025, we interviewed 16 first year college students (interviews are deidentified, with aliases chosen by the interviewee) to learn more about their use of generative AI in their personal and academic lives, as well as their evaluation process and trust level for information generated by AI. Our goal was to find out if and how they used generative AI as part of their information landscapes.
One of our questions asked interviewees to describe how they evaluated an image for authenticity, which included determining if an image had been photoshopped or created by generative AI. Their responses were illuminating, with some, like Jacob, focusing on how well content visually replicated human likeness.
“I feel like it's pretty noticeable, like on social media, if you see something AI generated like a picture, there will be defects in it. If it's humans, it's going to be like fingers or just something about them. Looks off. Uncanny.” - Jacob
He also uses our favorite term, one that came up repeatedly in our interviews: uncanny. The term likely refers to the uncanny valley effect, in which people experience a negative emotional reaction to something that is not human but attempts to resemble one.
Others, like Amy, focused on the minutiae, such as whether an image obeys the laws of physics.
“When it's images, since I do a lot of crafts, it's obvious. Like for example, I do crochet. You can tell by the way it [Gen AI] will often produce it as if it's like a piece of fabric instead of it. So it's a, it's an understanding of how things exist, like free flowing in the world that make it obvious.” - Amy
Amy uses her own knowledge of fiber arts to consider the nature of fabric and what it functionally can or can't do. Her understanding of the raw materials involved goes beyond anything an AI can have, and it helps her determine image authenticity.
And of course, there is the reference to extra fingers:
“By stuff like, the six fingers” - John
There is an obvious joke here about The Princess Bride, but a six-fingered man does give one pause. However, we should also pay attention to the phrasing Amy uses as she explains how she knows a fabric or pattern is produced by AI: "it's an understanding of how things exist, like free flowing in the world". How does generative AI exist in the world?
A bit like a Muppet, in that we pay attention to the character and not the person or people who animate it. When interacting with characters from Sesame Street, performer Carmen Osbahr says their human guests will often address the character rather than the puppeteer, even off-camera. “Most everybody who visits us talks to the character like they’re alive...the moment we bring a character down [to rest], we have a conversation, but it’s great to have a relationship with a character and a celebrity. They’ll talk to Elmo, Rosita, Cookie Monster, and we’re talking to them right back”. Guests see the character as a distinct individual but are still able to recognize the role of the performer as an active participant in bringing the character to life.
Unlike a Muppet, generative AI doesn't exist in the physical world the same way: it can complete digital tasks and answer questions, but it can't pour a friend a glass of water or give a hug, or count. More concerning, Gen AI doesn't have the same transparency; while we can easily discover Cookie Monster's motives for most of his actions (COOOKIES), as users we can't immediately see how Gen AI is created or the data it draws on for its responses.
This lack of transparency informs how students interact with Gen AI; while only three of the sixteen students we interviewed reported having read the terms and conditions for any Gen AI tool, they were not unaware. Danny shared with us that "A lot of times it's just scraping information off of other sources. And so, if that source has limitations, so will the AI". Other students also described their uncertainty about AI-generated information; power user Cameron shared, "I assume they're probably putting everything I give to the AI as a prompt for training data sets". At the same time, he tells us, "I would say I don't trust it at all. I see AI as trying to look correct, but it doesn't actually know what it's doing". While students are skeptical about how Gen AI retrieves and uses data, they haven't stopped using it; how are they navigating this disconnect?
A certain level of metacognition, thinking about thinking, has to occur to use Gen AI tools effectively. Our use must acknowledge the materials AI has been trained on, the inevitable biases within that material, the algorithm that shapes the way it interacts with both information and the user, and whether an individual's use case is even appropriate for a Gen AI tool.
As social media becomes filled with people saying "I asked AI and it said…", Googled answers, or even ones dredged up from memory, are becoming a thing of the past. From our interviews, Nufi shared with us a glimpse of why this is happening: "I feel like it's like... a new Google because you put a question in, you have to go through all these links and like, let's say like ChatGPT, you put your question in and it'll give you it [an answer] right away, [rather] than clicking links and reading about it and stuff like that". If asking Google takes more work, Gen AI becomes the easy answer.
If Gen AI takes the burden of thinking away, how do we encourage metacognition? The average first-year college student is still doing the hard work of learning new metacognitive behaviors, making the work of thinking take both more time and more energy than it does for an expert, during a time in their life when they are likely short on both. Add in the discomfort of not knowing something and of not yet having honed the expert skills needed to skim and evaluate resources found on the web: how do we help them find value in doing some of the challenging metacognitive work themselves when AI is easier and more comfortable to use?
As Gen AI advances, it will get better at having the "right" answers and will respond in ways that feel even more natural than they do now. That shouldn't change our wariness about Gen AI, the metacognitive skills we use to engage with it, or how we identify it being used in our information environments.
Where do we go from here? We don't think there is a single answer to this question, but we argue there is a shared responsibility. The responsibility of navigating this changing information environment belongs to all of us, not just to the individual user; it lies with the companies releasing Gen AI tools, the creators of new K-12 educational initiatives, policy makers, educators, government officials, and each of us as citizens. While we may want to treat Gen AI as a Muppet with thoughts and feelings, we can't ignore the performer who is creating that character; nor can we ignore the vast amounts of black-boxed data, the algorithms, and the people who program and limit Gen AI. They require our attention if we want to responsibly use a seemingly pervasive tool.
Katie Blocksidge is the Head of Research & Education Services at the Health Sciences Library at The Ohio State University; her favorite Muppets are Beaker and Kermit. Hanna Primeau is the Instructional Designer at The Ohio State University Libraries; her favorite Muppets are the Swedish Chef and Beaker. Please visit their Linktree to see their other scholarly work.