The Puppet and the Puppeteer: Why Our AI Panic is Misdirected
The spectacle is mesmerizing. A few words typed into a box conjure sonnets, code, hyper-realistic images. Generative AI is the dazzling toy dangled before us—intellectuals, artists, students—and as any magician knows, the key to the trick is misdirection. We stare, transfixed, arguing over the puppet, while the puppeteers—the Big Tech oligopoly—quietly secure the strings that will shape the future of creative work.
The critical reaction has been fierce but largely misplaced. We judge the large language model for its bland prose, its ethical failures, its tendency to "hallucinate." We debate whether it's truly intelligent or merely a sophisticated pattern-matcher. These conversations matter, but they're also a distraction. They assume the central danger is the tool itself. This is the error. The tool is not the problem. The problem is who controls it, and to what end.
The danger isn't that we're learning to use a new technology. The danger is that we're so mesmerized by its novelty that we're missing how it's being deployed: to deskill workers, to extract and enclose the commons of human expression, and ultimately to replace human creativity not because the replacement is better, but because it's cheaper and more controllable.
To see this clearly, we need to shift perspective. The philosopher Gilbert Simondon argued that technologies become alienated from us when we misunderstand their function—when we see only what they're sold as, not what they do within a given system. AI is sold as your co-pilot, your creative partner, liberating you from drudgery. But look at the business model, not the marketing. These systems aren't designed to make you a better writer or thinker. They're designed to create a world where the most economically viable option is to stop creating from your own lived experience and instead generate a cheap simulacrum. The goal is to make you—the human—the most expensive, and therefore the most expendable, part of the process.
This isn't about whether AI can write a good poem. It's about building an economic system where writing a good poem—a flawed, human, emotionally urgent poem—matters less than generating ten thousand mediocre ones. The real fight isn't against the algorithm. It's against the logic that wields it: the logic that says efficiency matters more than meaning, that scale trumps depth, that your messy, irreplaceable creativity is a problem to be solved rather than the source of everything worth calling human.
The trap is that we keep debating the puppet's performance while ignoring who's pulling the strings and why. We argue about whether the output is "real art" while companies lobby to gut copyright protections. We marvel at the technology while gig workers churn out data for pennies and writers watch their livelihoods evaporate. The tool itself is almost beside the point.
So what do we do? We stop staring at the toy. We look at who's holding the strings and ask what game they're really playing. We demand transparency about training data and real regulation, not industry self-governance. We build new models—economic and social—that reward human creation rather than its synthetic replacement. We learn to use these tools, yes, but we refuse to let their logic become our own.
The future of creative work—of human thought itself—depends on whether we can see past the trick.