Why Sal Khan’t: On Learning by Making but Teaching by Telling
Civics of Technology Announcements
Ghosts in the Machine AI Documentary Screening: Join our Civics of Tech community for an online screening of the new documentary, Ghosts in the Machine, on Thursday, May 14, 2026 at 7:30pm EDT. Attendees will watch the screening separately and then be invited to a Zoom discussion with participants in the documentary, including Thema Monroe-White. Civics of Tech has paid for the screening, and you can join us by clicking the “Get Tickets” button on this page. However, we have limited tickets, so please only reserve tickets if you are sure you will be able to attend. We can add tickets if they sell out. Learn more on the documentary website.
Book Club: We’re also reading Disabling Intelligences: Legacies of Eugenics and How We are Wrong about AI by Rua M. Williams. Join us on Wednesday, May 20th at 7:00 PM EDT. You can register here and purchase the book here.
New Review: Read Tina Chung’s review of Yuval Noah Harari’s Nexus.
Conference Info Coming Soon: We are excited to share the details, so stay tuned!
Today’s blog is a cross-post from CoT Board Member Punya Mishra. The original post appeared on his blog (which you can find here) on April 16, 2026. This version includes a coda by Jacob Pleasants, Marie Heath, and Punya Mishra that speaks to subsequent developments related to Khan Academy.
Two pieces crossed my feed recently, both about Sal Khan and the AI tutoring revolution that wasn’t. The first was Matt Barnum’s reported piece in Chalkbeat, where Khan himself acknowledged that Khanmigo, the AI chatbot tutor he launched three years ago with world-changing ambitions, was “a non-event” for most students. “They just didn’t use it much,” Khan said. His own Chief Learning Officer, Kristen DiCerbo, put it even more plainly: “So far I am not seeing the revolution in education.”
The second was Dan Meyer’s sharp obituary on LinkedIn, titled “RIP Khanmigo & Edtech Industry Dreams of AI Tutors.” Meyer traced the whole arc: the TED talk predictions, the philanthropic subsidies, the increasingly aggressive way Khanmigo inserted itself into the student experience (because students wouldn’t seek it out voluntarily), and the steadily shrinking user projections. His conclusion was blunt: if Khanmigo died with every advantage in the world (early OpenAI access, Microsoft backing, government subsidies, Sal Khan’s Rolodex), what hope should the rest of the edtech industry place in chatbot tutors?
These are important pieces, and I’d recommend reading both. But reading them, I found myself thinking about a deeper question. Not whether the revolution failed (it clearly did) but why it was never going to work in the first place.
To explain, I have to go back a bit in time, to my days at MSU, when I was out there giving talks about technology integration, the critical role played by the teacher in that entire process, and the significance of students actively constructing representations of their understanding.
So in these talks I used to show a clip from the Charlie Rose show (see below). It’s an interview where Khan describes how he prepares to teach a new topic. And it’s wonderful. Here’s Khan on learning about, say, Napoleon and the French Revolution:
“I approach it from what my brain would like to see… I like to see a scaffold, I like to see a map… what is the Holy Roman Empire, like where, what is that now?”
He reads Wikipedia first, “just to get the scaffold.” He draws timelines. He copies maps and pastes them onto his digital blackboard. And then he does something genuinely important: he pushes past the surface until he hits the questions that textbooks skip. Here’s Khan on the neuron:
“A biology book will tell you okay the signal goes across because there’s a myelin sheath and I’m like yeah but how does putting a little tissue around a neuron, how does it make the signal go faster? And no biology book will tell you that answer.”
So what does he do? He ponders. He thinks it through by analogy (fiber optics, signal amplification). And then he calls up friends who are biologists or communications engineers and asks: “Does this make sense?” Sometimes they confirm his intuition. And sometimes, beautifully, they say: “You know what, we don’t know.” Khan’s response to that is perfect: “Why didn’t the book tell me that?”
I used to play this clip and then ask the audience a simple question: Look at everything Sal Khan does to learn something. He reads widely. He scaffolds. He draws. He questions. He calls friends. He argues. He makes connections across fields. He builds intuitive understanding from the ground up. And then he builds something that captures all he has learned so he can share it with others.
Now… how does Sal Khan want my kid to learn?
Watch a video.
The room always got it immediately.
Because nobody in that room would accept that for themselves. We all know, intuitively, that watching someone else explain something is not the same as understanding it. We would never settle for that as learners. And yet, somehow, we accept it as a solution for other people’s children. This is a version of a phenomenon that I have written about earlier: The reductive seduction of other people’s problems.
But there’s a second layer that I think is even more important, and that neither the Chalkbeat piece nor Meyer’s critique quite names. Khan’s personal learning wasn’t just active. It had a purpose. He was learning in order to make something. The video was his construction, his artifact, the thing he was building. That’s what pulled him through the hard parts, through the myelin sheath question and the calls to friends and the hours of immersion. He had a destination.
Students watching the video have no such destination. They’re receiving the product of someone else’s learning process. And when Khanmigo came along, the revolution was… a chatbot to help you receive more efficiently. Still no purpose. Still no making. Still no reason to push through difficulty. No wonder DiCerbo reported seeing more “IDK IDK” than substantive engagement. No wonder teachers at early-adopter schools found that students “didn’t really care for the bot.” Why would they? There was nothing at stake for them.
This is where John Dewey, writing over a century ago, becomes useful. Dewey argued that learning is built on four natural impulses: the impulse to inquire, to construct, to express, and to communicate. He saw these not as skills to be taught but as drives already present in every learner, drives that education should work with rather than suppress.
Go back to the Charlie Rose clip and watch Khan through this lens. He is living all four. Inquire: the relentless “why” questions, the refusal to accept surface explanations, the “why didn’t the book tell me that?” Construct: the timelines, the maps, the blackboard drawings, the scaffolds he builds for himself. Express: the video itself, Khan giving form to what he’s understood. Communicate: calling up buddies, testing his ideas against other minds, discovering together what is and isn’t known. And then building a representation of his learning, with his own unique voice and style, and sharing it with the world. Inquiry, construction, expression, and communication—all in one go! Intermingled so well that it is difficult to tell them apart.
All four impulses, firing beautifully.
And none of them available to the student on the other end.
Khan’s great error, I think, was not a failure of effort or sincerity. It was a failure of educational imagination. He experienced the full richness of learning and then designed a system that offered students only the residue. He gave them the destination without the journey. And because the journey is where motivation lives, where purpose lives, students quite reasonably declined the offer. First they declined the video (or rather, passively consumed it). Then they declined the chatbot. In both cases, the diagnosis was the same: nobody had given them a reason to care.
And this is what I think the edtech world keeps getting wrong. The assumption is that if you can deliver the right content, in the right way, at the right time, learning will follow. It won’t. Not without purpose. Not without the impulse to inquire, construct, express, and communicate. Not without, in Dewey’s sense, the learner actually doing something.
Teachers know this. It is, in fact, a large part of what teachers do: take something a student is not yet interested in and create the conditions that make them interested. Not through tricks or gamification but through the design of experiences that activate those Deweyan impulses. That is the work. And it is work that no video, and no chatbot, has figured out how to do.
I’ve written recently about how evolution’s answer to an unpredictable world was not “more data” but play, and about how children are optimized not for pattern-completion but for exploration. The connection to the Khan story is direct. Khan Academy, and then Khanmigo, are both autocomplete strategies: one autocompletes explanation, the other autocompletes tutoring. Neither makes room for the exploration, the construction, the messy purposeful making that is where learning actually happens.
Khan himself seems to have arrived at something close to this realization. “I think our biggest lever is really investing in the human systems,” he told Barnum. That’s a remarkable sentence from someone who has spent nearly two decades trying to improve education by routing around the humans. Whether his benefactors in the technology industry will be as excited to invest in human systems as they have been in software that tries to replace them… that remains to be seen.
Note: I don’t usually bring TPACK into my blog posts. I mean, how much weight can a Venn diagram carry? But this might be the cleanest case I’ve ever seen. Khan has technology knowledge in spades. He clearly has deep content knowledge (that Charlie Rose clip is proof enough). What he has never had is pedagogical knowledge: an understanding of how people learn, what motivates them, what makes the difference between someone who pushes through difficulty and someone who types “IDK.” The circle marked P in the Venn diagram. That is where the humans live. Content and Technology mean nothing without that.
Coda
A few days after Sal Khan admitted that his predicted revolution failed to materialize, he announced "a new model for college." In his blog he explained:
"We’re building this as a collaboration between three nonprofit organizations that share a mission to expand access to learning:
Khan Academy, with a mission to provide a free, world-class education for anyone, anywhere

TED, which has spent decades sharing ideas that shape how we understand the world

ETS, with deep expertise in measuring skills and opening doors through assessment

Together, we’re exploring what a new model of higher education could be and what it could make possible for learners around the world.

We’re also working with corporate thought partners like Google, Microsoft, Accenture, Bain, McKinsey and Replit to help ensure the skills learners build here connect to real opportunities and real-world needs.

Through TED’s global network and convenings, learners will engage with ideas and conversations shaping the future of work and AI”
So, maybe in this iteration of the revolution, students will be working towards an end that includes making and not just receiving content.
Maybe.
But what sort of vision is likely to come from “corporate thought partners” like Google and Microsoft, who have promised many of their own “revolutions” in education? What educational vision is likely to come from management consultancies?
And what of this collaboration with ETS, with its claim to be “opening doors through assessment”? It’s a strange sort of logic to imagine that the entity responsible for so many gatekeeper exams is actually the one “opening doors.”
It appears that Khan’s latest venture continues his approach to revolution through contradictions. The greatest contradiction of all may be that despite his repeated failures to make good on his promises, the funds and corporate partnerships only seem to expand.