A Good Week for Technoskepticism
By Jacob Pleasants
Today’s midweek blog post is a bit of a news roundup, inspired by a collection of items that have been showing up in my media feed. Each is interesting on its own, but together they make for a telling ensemble. They tell a story of (Ed)Tech criticism on the ascent, and an industry that seems to be entering a decadent phase. Let’s start with the most decadent:
Grammarly
On Wednesday, March 11, Grammarly CEO Shishir Mehrotra announced the end of a feature that had caused a fiery backlash. The feature, available in the paid subscription, let you get “expert” feedback on your writing from, supposedly, any of a number of real people, living and dead. From Joan Didion, to… Casey Newton? Odd choice.
A class action lawsuit has now been filed. Obviously, trying to profit off of “personas” of actual people is absolutely worthy of a lawsuit. The word chutzpah seems rather fitting. But more than that, the whole feature was simply absurd. Jane Rosenzweig made good sport of documenting the rather weak and generic advice that the so-called “experts” provided.
It would be one thing if this were coming from yet another foolish AI startup. If that were the case, we could laugh/cry at the VC money that went into such a silly venture and move on. But Grammarly is a product that is heavily used. Even if this particular feature was only used by a small fraction of Grammarly users (it was behind a paywall, after all), it speaks volumes about how this company approaches its role as a “writing assistant” more generally - and the impact it can have. Grammarly does not need to actually assist anyone with their writing. It just needs to make people (students) believe that it is useful. It just needs you to believe that the AI can give you Joan Didion-esque feedback. How would you really know what her real feedback would be like, anyway?
The real question: What will the fallout be for Grammarly? Will this misadventure convince some users to set it aside?
Autocomplete
Also on Wednesday, and also on the subject of AI “assisted” writing, we got a new research study from a Cornell team in Science Advances. It’s really worth reading the full study, but the short version is that they were looking at how autocomplete can change what people think.
Here’s the basic mechanism: when autocomplete fills in the rest of a sentence, it’s pretty easy to look at that filled-in text and say, “yeah, that’s what I want to say.” And when you adopt that writing as your own, you also adopt the content of what was written as reflective of your own thinking. Sure, the computer completed it, but that’s just an efficiency thing. The words on the page are still yours.
The research study tested this by having participants write about socially significant, controversial issues (e.g., should the death penalty exist?). They provided treatment group participants with a biased autocomplete that suggested text aligned with specific positions on those issues. The participants could choose to accept or reject the suggestions. After the writing task, they were asked to report their beliefs about those issues. As predicted, when participants got those biased autocompletes, their beliefs tended to align with the biases of the autocomplete. The real kicker is that most people did not believe the autocomplete to be biased, and did not believe that the autocomplete had any effect on them.
This, of course, is a contrived situation in which people are writing about a polarizing issue and the autocomplete is intentionally biased. Most of the time, autocomplete is being used in much more quotidian contexts, and isn’t intentionally biased (we hope). But that should not actually give us any comfort. Technology is never neutral. We don’t need a nefarious tech company to sneakily turn the bias knob for autocomplete to affect us. And it’s the everyday kinds of interactions that are probably the most consequential. Those little autocompletes add up, and it’s easy to write off each one individually. Surely, adding a few words here and there isn’t changing my thinking!
The Anti-(Ed)Tech Movement Grows
One more Wednesday happening: Jennifer Berkshire published a lovely piece on the growing movement against tech in schools. I am not sure that the anti-EdTech sentiment has truly reached mainstream status, but we are certainly seeing more stories about it in mainstream journalism. On Tuesday the New York Times published a story about the screen time battles playing out in schools. On Wednesday, we got an opinion piece from the NYT’s Jessica Grose on how “Teens are Falling Out of Love With Tech.” The alliances being formed around this issue are interesting.
Berkshire explores how we got to this moment, linking the mass adoption of EdTech to the school reform movements of the 2010s. EdTech, she argues, is ideological (it’s not neutral!). The tech products that permeate our schools are inextricably linked to the ideologies of school reform. Also, the tech companies have figured out how to profit quite nicely by catering to (and stoking) those ideologies.
The Ensemble Cast
Collectively, these items give me a sense of some modest but growing momentum. The behemoths of AI and EdTech aren’t stopping anytime soon, but maybe they’re slowing just a little bit.