A Conversation with Nick Seaver, Author of Computing Taste

CoT Announcements

  1. 2023 Annual Conference Submissions due TOMORROW, June 26: The 2nd Annual Civics of Technology Conference will be held online from 10 am to 3 pm EST on both August 3rd and 4th, 2023! You can learn more, register for the conference, or submit a proposal on the 2023 conference page. Submissions for sessions are due on June 26th!

  2. Next Book Club on 07/27/23: We are reading Meredith Broussard’s 2023 book, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. We will meet to discuss the book at 8pm EDT on Thursday, July 27th, 2023. You can register to join on our Events page.

by Jacob Pleasants

In Computing Taste: Algorithms and the Makers of Music Recommendation, Nick Seaver takes a close look at the people who design the algorithms behind systems like Spotify and Pandora. How do those people think about music? How do they think about people’s musical taste? And why do they make the design decisions that they do?

The book is absolutely worth reading, and to convince you of that, here’s a conversation that I had with Nick Seaver. He very graciously agreed to explore some of my curiosities about the themes of the book and its educational applications and implications. Please enjoy!

Jacob Pleasants: One of the things I think about as an educator is how people start to think critically about technology. So, what’s your story of that? How did you get to the point where you embarked on your research project?

Nick Seaver: It's a good question, and it's one of those things that’s so hard to recover. I came to this work on music recommender systems from a longer-term academic interest in music and technology. I had been interested in that ever since I was an undergraduate, and I wrote a senior thesis about audio recording technology and noise music.

There's a literature on music technology about this sort of relationship between the kinds of things we think about as music, the way that music is made, and the kinds of technologies that we use to make music. It is not always critical per se, but I think that that question of the relationship between expression, performance, and technical stuff for me really came out of that strong interest in new technology, which I thought was neat. And then I got into the conceptual side of that relationship.

Jacob Pleasants: In the book you do bring in critical technology studies scholarship. What was your point of entry into that conceptual space?

Nick Seaver: I feel like my approach is a little bit different from a lot of critical work on these systems, not because I’m not critical, but because the kind of critique that I do is a little bit different. We have a stock critique mode that we do in STS, which is very handy, which is that you find a technology or a science and you say: Hey, you thought that your science or technology didn't have any society in it, but we’re going to show you that it's there, that it's meaningful, that it has consequences, and so on.

In the case of music recommendation, I find it so fascinating because nobody involved is under the illusion that there’s no society in it. Nobody working on modeling musical taste has this idea that they're doing something that's outside of culture. And so the interesting question for me is: What do they do about it? What’s the consequence of this? What's the result of knowing that culture is involved in your technology? How do they understand culture?

I think a lot of the critique of algorithms out there is premised on the idea that if we could let people know that this is a social process, then something would happen. And it's not always clear that something would. It’s worth getting into the details of what’s going on in this domain where people are really aware that they're working on a social technology. It’s hopefully useful to getting us to think a little bit more deeply about the kinds of critiques that we make.

Jacob Pleasants: One of the fascinating themes in your book was your examination of how there are these people who are working to design these systems, but those systems also shape the way that the designers think about music. That story is a little different than what you’d get from a more typical critique.

Nick Seaver: I can't say I was surprised by it. I might have even been looking for it, because, again, to go back to the music technology space: the early technologies that turned into the phonograph were often thought of as models of how human hearing worked. And then once people built those technologies, they would come back around as models of hearing, and hearing would become a model of phonography. You have this kind of mutuality in the way that we understand how people work and how machines work. That's normal in some ways, right? You often find people building a technology like this, and they start to reflect on their own ideas going into the system, and it turns around.

In the prologue of the book, I describe my original plan for looking at these people building a technology. I figured that building a recommender requires a theory of taste. How am I going to recommend music to people? What am I going to use? What data? I thought I was going to find that, say, Jeff over here has a theory, so here's the technology he built. You can see how the technology informs his theory or his theory informs the technology. I thought that was really what I was in for, and there's a little bit of that. But there was a lot more vagueness. I found a lot of people not really holding very explicit theories and building not very specific technologies, even technologies that are designed to be “open” and capacious. Those are an interesting kind of technology. They're different from a lot of technologies that we've studied in the anthropology of technology and in STS.

They want to try to build a system that's as “open” as possible, a system that can accommodate “taste” no matter what it means. That might require them to collect all sorts of weird data that people might not want to give, that we might not have collected before. That was an interesting moment for me, when I realized that there just wasn't the sort of straightforward thinking and system building that I thought I was going to find.

Jacob Pleasants: There is currently a lot of interest in building algorithms that avoid using explicit theories so that they can be “open,” including generative AI. How have you watched these current developments unfold? How have you taken some of your insights with you?

Nick Seaver: It’s a weird moment for sure, and I’m not sure that I have a grip on what's going to come next. I do think that a critical lesson here is that any of these systems that are “open” in quotation marks are not as open as they seem. There are always these constraints. And this is in some ways a pretty classic critique of technology. You think that this system lets you do anything you want. You think that the highway system lets you drive anywhere you want to go, but you can only drive to places that are on the highway system. And so it's worth keeping that in mind, and of course it's hard to find the edges of these things, because they're not as obvious in a generative system like this. They become apparent as people do things. This is where a lot of the work about bias and AI is extremely valuable, because it's relentlessly pointing out the limitations of these systems. We need that when most of the discourse is about, “we'll be able to do this as a result of that,” and is not really focused on the limitations.

And this is maybe a sort of shift in my own interest and focus. Especially around generative AI, I think a lot of talk ends up focusing on the capacities of these systems, and not enough on the social context into which these systems are delivered. What happens to people? Because there's a whole different set of concerns at play. If your worry is whether the system is going to be good enough to take Job X, that's actually not the relevant question. Really, it’s whether the system is good enough to convince the boss of Job X that they should fire all of the Xs and just use this technology instead. The important part is not so much what it can do, but what people think that it can do, and how people relate to it. And I think that connects to a bunch of questions about how recommender systems work, and how people think about how recommender systems work. But in this much broader scope, because we're in this moment now where people for some reason think that these large language models are going to be able to do every last thing that we want to do.

To be honest, I end up in moments like this sort of retreating into ethnographer mode and saying I’ll just sit out for a little bit and see what happens. I really have no idea what's going to happen next.

Jacob Pleasants: So, let’s try on the educator lens. When it comes to music recommenders, everybody uses these things. Go to a typical classroom and pretty much all the students will have experiences with them. So, what do you want young people to learn about these systems? What's worth knowing?

Nick Seaver: I think there are two things. The first is that I know that there's a tendency for a lot of people (and I won't isolate this to young people) to imagine that these systems have capacities that they don't necessarily have. I think a lot of people imagine recommender systems as being able to access some deep truth about the self. I think a lot of people imagine that these systems have access to kinds of data that they don't have access to.

And who knows what the truth actually is, right? A funny thing about recommender systems is that there's no obvious or objective way for them to be “right.” There’s no equivalent to, “Well, the airplane flies.” There's not an obvious thing that you might point to. And so they're a great example of a very classic STS lesson, which is that successful technologies develop their terms of success. You become successful once you get people to agree that the thing that you do is what counts as being successful. That sounds like I’m doing circular reasoning, but it's not me. It's them!

I want to encourage people to think about the kinds of data these systems can access. There are a lot of conspiracy theories about how they work that I think lead people to underestimate just how effective certain kinds of data processing can be. Take the idea that, in order to recommend you something, your phone must have been listening to you talk through the microphone, rather than just inferring something about you. That inference is actually not that hard to make given your behavior, who your friends on social media are, and all the other things that your phone is actually using. There's a kind of language that people use in recommender systems where they talk about the “magic” of collaborative filtering. But it’s actually a very simple system that says: if people listen to the same songs as you, you're probably going to like the other songs that they listen to. This produces results that are very striking, results that seem like they know things about the content of the music you've been listening to, about what it sounds like, and they don't.
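To make that point concrete, here is a minimal sketch of the collaborative-filtering idea described above. It is not the code of any actual service: the user names, song ids, and scoring rule are all hypothetical. The key thing to notice is that it recommends songs purely from overlapping listening histories, with no information at all about what the music sounds like.

```python
# Illustrative sketch only: a toy collaborative filter in the spirit described
# in the interview. All user names, song ids, and the scoring rule are hypothetical.
from collections import defaultdict

# listens[user] = set of songs that user has played
listens = {
    "ana":  {"song_a", "song_b", "song_c"},
    "ben":  {"song_a", "song_b", "song_d"},
    "cara": {"song_b", "song_e"},
}

def recommend(target_user, listens, top_n=3):
    """Score unheard songs by how much their listeners overlap with the target user."""
    heard = listens[target_user]
    scores = defaultdict(int)
    for other, songs in listens.items():
        if other == target_user:
            continue
        overlap = len(heard & songs)      # shared listening stands in for similarity
        for song in songs - heard:        # only songs the target hasn't heard yet
            scores[song] += overlap
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Note: nothing here inspects the content of the music itself.
print(recommend("ana", listens))  # ['song_d', 'song_e']
```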

Okay, so that's a long answer. But the first thing is that these systems are not magic, and it's worth knowing what kinds of data they tend to use.

The other thing that's important is kind of the basic lesson of the book, which is that when we talk about algorithms, we're talking about people. We're not talking about autonomous technical systems that make decisions on their own. So many things that people say are true of algorithms are actually true of the business models of the companies that run the algorithms. We are in a moment where we often want to change things about these systems, and people often say we need to get rid of the algorithms. But more often the thing that they want to get rid of is a particular form of optimization in the algorithm. Maybe you have a system that knows enough to put you into a filter bubble, to give you the same cluster of stuff over and over. But that means it also knows what's outside of your cluster, and it could give you those things. It doesn’t do that, not because it's an algorithm but because the people who design it want it to work in some other way. And so, I think keeping in mind that there are humans behind these systems is important because it’s a real point of purchase in terms of making critiques that change the way that things work.

Jacob Pleasants: So I'm curious about you. How do you engage with music recommendation these days? And do you feel like you have a healthy relationship with it, knowing what you know?

Nick Seaver: That's a good question. I use it all the time as a listener of music. I like to think that I use it in a way where I sort of understand what's going on and why certain things are happening. It's hard to argue with the fact that it is, you know, convenient. And I will say that the other thing that's going on here, and that a lot of people tend to not think about, is that music taste in the world before music recommendation was already shaped by various kinds of infrastructures for circulating music. You would find out about things from your friends, who found out about them from their friends, from radio, and so on. The alternative to recommenders isn't some organic and natural world, where we find music in an unproblematic way.

Arguably, we've been down this road for a long time, ever since we had a music industry that circulated recordings. Think of some guy in the 1960s going to a record store and picking out one record among a selection of many. His choice is based on taste because there’s a big building full of records that are identical to each other in cost and shape. The difference is in the content, what they sound like and what they say about you. But that kind of thing requires audio recording. You can't have musical taste like that if you've got to go listen to a band or play your own music. So, I absolutely think that our taste in music is being changed in some way by living in a world with recommender systems, because it can't help but be changed.

We can't say in advance how it's going to be changed, or what exactly things are going to be like when people start to have preferences for certain kinds of musical artists, or when they encounter songs on a playlist and don't know anything about the artists. Maybe they care enough to look for something, maybe they don't. What is a healthy relationship to music recommendation?

There are some important issues that are not recommender specific. Streaming services, for instance, should probably pay more money to the people who make the music. But you could do that without changing the recommender. However, in other domains recommender systems are very clearly associated with harm. People talk about filter bubbles, and they may not be real empirically, but people are worried about the general sort of balkanizing process in play. People are worried about social media addiction, especially with video content.

Interestingly, music sort of gets a pass because you can listen to it in the background, which is actually the default thing that you're doing when you're using a music recommender system. For a while I was looking at the kinds of bills that people in Congress were putting out, saying, “We've got to stop these bad recommender systems from harming the youth of the nation.” This is an actual bipartisan issue in the United States, which is really interesting, too. What you'll see is that there's usually a carve out for music. I remember seeing one bill that would ban auto play, except in music. You're allowed to do it there, which is sort of interesting.

Jacob Pleasants: From a pedagogical point of view, then, maybe music recommenders are an ideal object of inquiry. Maybe they don't carry the same intense emotional investment, or the stakes feel a little bit lower, is perhaps the way to put it. And yet the insights apply to other things as well, like TikTok, that are very recommendation driven - those things that we as a society are more nervous about.

Nick Seaver: I really like teaching with music recommendation. I mean, obviously because it's working from my own material, but it does give you a lot of chances to let students engage with some of these issues in a space that feels a little bit more low stakes. I always feel obligated to say here that music can be very consequential. Cultural studies tell us that very important things play out in some of these spaces. But it's certainly different as a domain than some of these other areas. It's not targeting drones, and we're not organizing police activity. There's a sort of openness to experimentation. It's very handy in the classroom.

It was also handy for me empirically, talking to people building these systems. They were pretty willing to speculate on stuff and think outside the box because they weren't worried that they were going to accidentally endorse some really grotesque abuse of power. I think of music as a useful thing to include in this overall set of systems that we're concerned about. Not to replace them, but just because it does have certain different qualities that we might miss if everything we thought about algorithmic recommendation was based on social media feeds or something like that.

Jacob Pleasants: Are there particular pieces from the book that you find yourself using in class?

Nick Seaver: I use it all over the place. There are a few spots that I think are handy. One, in terms of ethnographic vignettes, is that basic point about the humans in the system. I talk about the fact that when I went to do ethnographic fieldwork there were people there to do it with. I think this is a funny thing, because it's obvious, and it doesn't really feel like much of a finding at all, because it isn't a finding. But it is important to take seriously and to remember that there are people in there, and they make decisions. And you know that these decisions can be kind of arbitrary; they're not doing them because rationality makes them. They're doing it for some other reason, and anything can happen.

There’s a chapter in the book that's about the history of evaluation in recommender systems. So, what's the metric that people use to decide whether a recommender system works or not? And I find it’s really helpful to put something really concrete like that in front of students. These early recommender systems from the mid-1990s had a fairly simple model of evaluating whether they were correct: we're going to guess the rating that someone is going to give to an item, and the closer our guess is to what they end up rating the item, the better our system works. But that changes over time, and eventually transmogrifies into the system we see everywhere today, in which a recommender system doesn't predict ratings in any precise sense. Instead, it's usually tied to engagement metrics. If you make this tweak in the recommender, are people using the system for longer? Are they listening to more songs? Are they watching more videos, coming back on more days, and so on?
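As a rough illustration of the two evaluation regimes contrasted here, the sketch below compares a 1990s-style rating-prediction error with a simple engagement comparison. The numbers, variable names, and the choice of mean absolute error are my own illustrative assumptions, not drawn from the book or from any particular company.

```python
# Hypothetical illustration of two recommender evaluation regimes:
# 1990s-style rating prediction vs. an engagement metric. Numbers are made up,
# and mean absolute error is just one common choice of accuracy measure.
import statistics

# Rating prediction: how close are our guessed ratings to the actual ratings?
predicted = [4.2, 3.0, 5.0, 2.5]
actual    = [4.0, 3.5, 4.0, 2.0]
mae = statistics.mean(abs(p - a) for p, a in zip(predicted, actual))
print(f"mean absolute error: {mae:.2f}")   # lower = "more correct" recommender

# Engagement: did the tweaked recommender (B) keep people listening longer than A?
minutes_a = [31, 45, 22, 60]   # listening minutes per user, control version
minutes_b = [40, 50, 30, 75]   # listening minutes per user, tweaked version
print("B wins" if statistics.mean(minutes_b) > statistics.mean(minutes_a) else "A wins")
```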

This is a handy case because we can ask, “What are the consequences of this?” What happens if you try to optimize something like Netflix for watching as many movies as possible versus watching for as long as possible? Let’s say that watching as many movies as possible is your optimization target. Well, if someone's starting a lot of movies, that might be a problem, because they might not be finishing the movies. But if you optimize for how long they're watching Netflix, you might have negative consequences where what you're optimizing for is something that's not in the best interest of the person. It’s a problem when some of your users are watching Netflix for 24 hours a day.

So, I like bringing out those really specific examples of technological choices. This is really essential to an anthropology of technology. Our job is to take a domain that people usually think of as being mandatory in the sense that there are “correct decisions.” We go in and show how there are all these choices that people are making. There are options in the design and in how the systems are deployed. There are options everywhere, and those options are where social stuff happens. That’s where society is in these systems. And they're also the places where we can change things, where we can say, “Wait a minute. Hold on! Maybe we want to do something different than what we were doing.” It’s about recovering the optionality of technology.

Jacob Pleasants: I think it's so easy to just forget about decisions that were made upstream, or just assume that those decisions were the only ones that could have been made. As you've worked with this particular example, how have you gone about demystifying that decision-making process?

Nick Seaver: I think there are a lot of ways to think about what informs the decisions that get made. We have some theories suggesting that part of the problem with these systems is that they're designed by demographically homogeneous groups of people who have not had occasion to consider a lot of kinds of potential harm that would be really obvious to someone in a different position from them, and I think those theories are really useful to keep in mind. One of the issues with pursuing that line of critique is that it leads to saying that we just need to include someone like that on the team, and then they're going to solve all our problems. That's not a position you want to put a person in, in the first place, and it may not even work, because that’s not the only thing going on.

There are lots of choices being made all over the place but they're not all being made independently of each other by completely autonomous agents. They are all making choices in a social and cultural context. And this is one of the things I’m trying to do in the book. As an ethnographer, I want to talk about the way this group of people thinks in general and learn about the sort of worlds in which these people imagine themselves. This will help you anticipate the kinds of decisions they're going to be likely to make in the future, because they're not going to make them randomly or arbitrarily. They're going to make them according to some kind of model of what they're trying to do, what they think is valuable, what they think the world is like.

Jacob Pleasants: There’s work out there that tries to hold a lens up to Silicon Valley culture, and it often tends to be very much a caricature. But what I appreciate in your book is that you give a very earnest impression of what these people believe. This is how they work. This is how they think. They might be somewhat naive in certain respects, but there's great importance to genuinely trying to understand the culture of the designers from an earnest and not cynical point of view.

Nick Seaver: The ethnographer’s job is to look at something and think about how it makes sense to the people who are doing it. To be honest, I don't think that everybody should approach these systems the way that I do. I think it's useful to have a variety of approaches. I do think that having extreme cynics out there is actually a very handy thing to have in the world. I think that in a lot of these discourses, critical and otherwise, there's a tendency to imagine that anything we're saying about these systems, or how to approach them, is a guideline for how every person on earth ought to think about these systems. And I don't think that's true. One of the great things about living in a human society is that different people can do different stuff. The division of labor is very useful in a lot of ways.

Imagine if every day before you drove your car, you had to think about every decision that went into building your car and make sure you were properly critical about it. You couldn’t do anything. It's useful to not have to think about every last thing. But it's also useful to be able to open the black box occasionally and think about how those things are built. And so it's nice to have people who are focused on various aspects of these systems. I think about my role as being charitable, not because I think that this is a group of people that needs charity. Lord knows, plenty of people are willing to just believe whatever comes out of the mouth of someone who knows how to program a computer. But sometimes by taking them more seriously than they take themselves, you can get into interesting critical territory.

I think the line I use in the book is that I can take these people more seriously and less seriously than they take themselves, because I’m not one of them. I can move around. I don't have to make a working recommender system. I can look at them using all of these trapping metaphors to talk about their customers, and go off and think about how their systems are, or are not, like other kinds of traps. Which is not a thing that the designers do on a regular basis. Or I can look at the fact that they may use all these “natural world” metaphors like saying that they are doing data gardening or talking about how machine learning is a kind of farming. And I can say, “Hold on, what does that mean? Let's get into it.” They may not be trying to say something really precise when they talk about these things, but I can pause the tape and go in there and see whether there's something interesting there. That, I think, is really my role.

Jacob Pleasants: Have any of the designers that you write about in your book read your research? Has it changed the ways that they think about things?

Nick Seaver: Yeah, I've actually heard from a couple of people who are in the book who have read it, which is always terrifying. You worry whether you've said something wrong. But they generally say that it sounds true, even some of the critical stuff. I will say that there's this interesting distancing thing that people can do, especially when your field research was a little while ago. I did most of the research here in the first half of the 2010s. So, they can say that they’ve changed. The environment now is very different in terms of popular critiques of algorithms. When I started, there was not a very lively popular discussion of anything algorithmic, whereas nowadays I think they're much more used to those sorts of criticisms. And so about the things that are more explicitly critical in the book, I think they say, “Yeah, I can see how that's an issue.”

Does it change what they do? I'm not sure. I think it certainly is useful to change how people outside of these systems think about how they work and to think about how they might critique them differently at the end of the day. A lot of how these systems work is hard to change through discourse. So, I think it's an open question really. What's the best way to go about this? We have organizations that have ended up putting in, say, ethics groups. But then we see all these studies of what happens in these groups. How do they work? How do they change the way the companies work? And we see, unfortunately, that it’s maybe not a lot. It’s hard work to do, and they get shut down as soon as the interest rates change, right?  It's a very big and hard question.

Jacob Pleasants: So, what are you working on now?

Nick Seaver: From that work on recommender systems, I got really interested in questions about attention: how attention is measured, how attention is valued. A lot of these systems work in a context that we usually call the attention economy. That's the idea that collecting attention is the most valuable thing you can do, that it leads to money, that it's better than money, all these different things. And as an anthropologist, I'm really interested in big concepts like that and how they take over people's imagination about the social world. And so I’m working on this project now about how attention is understood and operationalized in certain technologies by a bunch of different people working in and around machine learning.

People may know that underneath the hood of most of these new generative AI systems is a kind of technical object called a transformer. That's what the T in GPT stands for. And inside of the transformer is a mechanism that's called an “attention mechanism.” And so there's this funny question of what's going on with metaphors of attention inside of the computer at this moment, where people are also really worried that the way these systems work is going to cannibalize human attention.
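For readers curious what that inner “attention mechanism” actually computes, here is a minimal, textbook-style sketch of scaled dot-product attention. It is a generic illustration of the mechanism that transformers rely on, not code from any system discussed in this interview, and the array shapes are arbitrary.

```python
# Generic, textbook-style sketch of the scaled dot-product attention mechanism
# found inside transformers; array shapes are arbitrary and for illustration only.
import numpy as np

def attention(Q, K, V):
    """Each query attends over all keys; output rows are attention-weighted sums of values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax: how "attention" is spread
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```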

I'm looking at a bunch of different ways that attention is measured in different domains, like how long you're on a website, eye tracking, etc. I want to see what kinds of understandings of attention get embedded in these systems. Because it's really striking to me that in critiques of technology, we see attention as being this really important thing, as being human-defining. And yet attention is also this thing that the people working in tech agree is really important, but in a different way. There's a kind of competition over what this concept means at the present moment, despite the fact that it seems to be everywhere. So that's what I've been working on for a little while now, interviewing lots of folks in different spaces who work on attention in some broad sense.

Jacob Pleasants: So, when's that book coming out?

Nick Seaver: Computing Taste is my dissertation book, and you work on that for a really long time, over and over again. I think it might be nice to write a book that I don't write a hundred times before I write it for real. That's what I’m hoping to do with this one.
