Confessions of an AI Skeptic
The artificial-intelligence revolution is coming. Should we welcome or dread it?

I’ve wanted to write a post about artificial intelligence (AI)—specifically the large language models (LLMs) and generative AI supposedly poised to revolutionize our world—for quite some time, probably a couple of years now. But until now, it hasn’t happened. Part of the reason is the relentlessness of the political news cycle—in the run-up to the 2024 election, and then through its often-alarming aftermath—which has kept me preoccupied with the latest headlines.
But of course AI has been in the headlines throughout this period. I could easily have chosen a previous moment to set aside politics and reflect on the latest developments and their possible ramifications for our economy, culture, and even self-understanding as a species. But I’ve stayed mum on the topic, for complicated reasons.
For one thing, I really try to avoid getting swept up in hype—and I’ve rarely seen a hype cycle more intense than the one cheering on and raising alarms about AI. Note that the hype moves in both directions. AI’s boosters and its Chicken Littles agree about one thing: the new technology, just a few short years away from reaching the threshold of “superintelligence,” is going to be momentous—it will either transform life and work in unprecedented ways or destroy the human race at the hands of its own creation. I’d like to see both sides taken down a peg, and my instincts tell me that’s exactly what’s going to happen.
But the truth is I don’t know enough about AI to make the case with any kind of authority. I mean both that I know next to nothing about how it works under the hood and that I’ve spent remarkably little time working or playing on the front end with the various generative AIs currently available. I’ve done a bit of that and found it mildly interesting and impressive, but not something I’m especially excited about. Other than interacting with Google’s AI by posing research questions in search queries and sometimes appreciating the results, I’ve never used AI in any serious way to write a post, craft a lecture, or construct an exam or paper prompt, let alone anything else.
Why would I? I do what I do—write and teach—because I love, and am good at, doing both. I don’t want or need an artificial mind and digital helpmeet to do it for me. Yes, I do this work to make money, and I guess it would be nice, in a way, to earn that income while spending less time working. But the work isn’t drudgery from which I long to be liberated by technology. On the contrary, the work—producing the essays and lectures—is the primary means whereby I figure out what I think about the world. I need to do the work myself in order to make sense of what’s going on around me.
I don’t normally consider myself—my distinct set of talents, my distinct blend of money-earning activities—exemplary in any way. It’s very hard to generalize from my idiosyncratic career path to anything or anyone else. Yet from the standpoint of the looming AI revolution, the work I do is indistinguishable from other cognitively sophisticated forms of what sociologists call “knowledge production”—which AI seems poised to “disrupt,” as the gurus of Silicon Valley like to put it, though it’s probably more accurate to say “render superfluous.”
Skeptical about that being possible? Consider this: A few weeks ago, Rod Dreher wrote a post about an angry column David Brooks had written about something Patrick Deneen published more than a decade ago. (I know, it’s kind of ridiculous that I’m now adding yet another layer to this series of writers talking to and at each other: Damon Linker on Rod Dreher on David Brooks on Patrick Deneen….) Anyway, after quoting a couple of lengthy passages from the Brooks column, Dreher block-quotes what appears to be his own eight-paragraph response. It reads like a typical Dreher post in both substance and style. Only at the end of the long quote does Dreher reveal the truth:
One of this newsletter’s readers asked ChatGPT to come up with a Rod Dreher response to the Brooks column. That’s what it produced. It is eerily like what I would have written, had I set out to do so.
This is more than a little unnerving to me.
You and me both, friend. Which is why I’ve penned this post—to make sense of that unnerved feeling, identify its sources, and explain why I come down where I do on the value of AI. I realize the topic is vastly bigger than the cross-section I examine in this post. I’ll undoubtedly return to write about different dimensions of the subject in the future.