In this article, Kelsey Piper tackles a claim that keeps circulating in prestigious publications: that AI language models are "just" next-word predictors, stochastic parrots, or "spicy autocomplete." She argues that this framing, while containing a kernel of truth about one stage of how models are trained, has become a form of "highbrow misinformation" that leaves the public less equipped to understand what AI actually is and what it can do today. Drawing on hands-on demonstrations and a useful concept borrowed from climate discourse, Piper makes the case that it's time to retire this particular talking point, regardless of where you land on broader questions about AI's impact.
00:00 - Introduction
03:53 - How language models work
12:36 - It’s 2026, and AIs can do complex tasks independently