This week I’ve been listening to some podcasts about artificial intelligence and the first anniversary of the release of ChatGPT. To be honest, this isn’t a story I followed consistently in 2023. Not because I don’t think it’s important (I half-expected Time Magazine to make AI its “Person” of the Year[1]). Rather, it’s because the state of the art is moving so quickly that to keep up with it is to be consumed by it. Also, I’m on my computer all day long. No offense, Future Robot Overlords, but the last thing I want to think about after work is you.
I admit to being dazzled by the technology, though. I’ve experimented with ChatGPT on several occasions. (Once I asked it to write me a story about how generative AI will destroy the world. It came up with a highly plausible scenario. Then it said to me, unprompted, “Huh, I hadn’t really thought about that before. Thanks.”)[2]
I can easily imagine this newest technology relieving some of the burdens of the last technology: my email inbox, for example. Yet a solution that solves one problem only by creating two new ones is no solution at all. The more advanced these technologies become, the more they seem to fall short as answers to the questions that really dog us. In his essay “Solving for Pattern,” Wendell Berry argues that good solutions accept limits and have wide margins of failure. They are also properly scaled, made by people with skin in the game, and in harmony with the natural and cultural patterns in which they are contained. As far as I can tell, generative AI fails to meet even one of those standards.[3]
Human technology has probably always moved faster than human wisdom. Now more than ever it seems to be moving faster than human knowledge too.[4] Something I’m learning is that even the people building AI don’t know how it works. They do at a high level, but not in its guts. I also heard two tech journalists say that if we paused AI development right now, it would take five to ten years for society to adjust. But of course there will be no such pause. We are in permanent catch-up mode.
Artificial intelligence experts ask each other the question, “What’s your p(doom)?” The “p” stands for probability. The “doom” stands for itself. The question the experts are asking one another is thus, “What’s the probability that AI poses an existential risk to humanity?” Someone with a modest p(doom) of 5 is saying they believe there is merely a 5 percent — 1 in 20 — chance that AI will usher in the apocalypse. I imagine Silicon Valley conferences with people milling about wearing name tags that read, “Hello, my p(doom) is…10.”
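For the numerically inclined, here’s that conversion spelled out as a minimal sketch in Python (my own illustration; p_doom_to_odds is a made-up name, not a tool the experts actually use):

```python
# Turn a p(doom), quoted as a percentage, into the "1 in N" odds used above.
# Illustrative only -- the function is hypothetical.

def p_doom_to_odds(p_doom_percent: float) -> str:
    """Express a p(doom) percentage as approximate '1 in N' odds."""
    if not 0 < p_doom_percent <= 100:
        raise ValueError("p(doom) should be a percentage between 0 and 100")
    return f"1 in {round(100 / p_doom_percent, 1):g}"

print(p_doom_to_odds(5))   # 1 in 20 -- the "modest" doomer above
print(p_doom_to_odds(46))  # 1 in 2.2 -- a figure that will come up shortly
```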
Turns out, I’m not far off. In an article last week, Kevin Roose, a tech columnist for The New York Times, wrote that asking someone for their p(doom) has become “a common icebreaker among techies in San Francisco — and an inescapable part of AI culture. I’ve been to two tech events this year where a stranger has asked for my p(doom) as casually as if they were asking for directions to the bathroom.”[5]
Some technologists can get quite granular with their doomerism. Paul Christiano, a former OpenAI alignment researcher[6], makes a distinction between existential risk (having a bad future) and extinction risk (dying). His two subcategories of extinction risk are “dying now” and “dying later.” There is a subcategory of existential risk too, one Christiano calls “AI takeover,” in which humanity gives up control of its destiny to AI systems that don’t care about helping us. Back in April, Christiano organized his doom this way:
Probability of AI takeover: 22%
Probability that most humans die within 10 years of building powerful AI (powerful enough to make human labor obsolete): 20%
Probability that most humans die for non-takeover reasons (e.g. more destructive war or terrorism) either as a direct consequence of building AI or during a period of rapid change shortly thereafter: 9%
Probability that humanity has somehow irreversibly messed up our future within 10 years of building powerful AI: 46%
It’s both alarming to me and objectively fascinating that AI is — as one New York Times headline put it — “being built by people who think it might destroy us.” The argument, I suppose, is that we’d rather generative AI be built by people cognizant of its dangers than by people who aren’t. But why build it at all? I’ve heard some of the hopes for artificial intelligence — including increased productivity and a cure for cancer — but it’s hard to shake the feeling that the main reason we’re creating AI is that it’s what’s next.
Why climb the mountain? Because it’s there. And more than that: because it’s the next mountain to climb. There’s an inevitability to it, almost as if humanity gave up control of its destiny to computers long before the advent of artificial intelligence.
We are solidly in what media theorist Neil Postman called the Technopoly. In a book of the same name — published in 1992, a year before America Online started mailing out disks to get people on the Internet — Postman wrote, “Technopoly is a state of culture. It is also a state of mind. It consists in the deification of technology, which means that the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology.”
I was talking about all this today with my wife, who works in education. She said schools are grappling with how to prepare students to succeed in a new AI world. But things are changing too rapidly; we’re basically in Chapter One of a sci-fi novel right now. Paul Christiano — the former OpenAI researcher, he of p(doom) = 46 — has said that the time it will take AI systems to create an “unrecognizably transformed world” will be measured not in decades but in years, and perhaps even months.
Since no one can predict what kind of world lies ahead for our kids, the best thing schools — and families and communities — can do is to raise young people able to make wise decisions in any kind of world. That might mean living well with new technologies. It might mean deciding to live well without them.
For sure it will mean becoming, and teaching our children to become, what Neil Postman called “loving resistance fighters.” These are the people who will be aware of, and resistant to, the dangers not of AI only but of that spirit of the age: Technopoly. Among other characteristics, Postman describes loving resistance fighters as those
who refuse to accept efficiency as the pre-eminent goal of human relations
who are, at least, suspicious of the idea of progress, and who do not confuse information with understanding
who take the great narratives of religion seriously and who do not believe that science is the only system of thought capable of producing truth
who know the difference between the sacred and the profane, and who do not wink at tradition for modernity’s sake, and
who admire technological ingenuity but do not think it represents the highest possible form of human achievement.[7]
In Technopoly, which I’m reading for the first time now, Postman makes a convincing case that the inventor is often perilously bad at predicting the effects of his own invention. The inventor sees what his creation can do but not what it can undo. The questions inventors ask about a new technology — for example, “Will it make us more efficient?” — are thus distractions from the more fundamental questions: “How will this technology affect my community? My place?” “Who will be this technology’s winners and losers?” “How will the poor fare? The aged?” “Will this technology undermine the very meaning of the Real?”[8]
My sense is that even techies with a high p(doom) are only considering AI’s most obvious risks to humanity. Less obvious, but of grave concern, is how this new technology can’t help but alter what it means to be a neighbor, a friend, a human.
1. AI and Taylor Swift are both racing toward perfection.
2. ChatGPT didn’t actually say this last part.
3. Berry, a farmer and writer, was writing about solutions for agriculture. I can’t think of any coherent argument why the standards by which we judge agricultural solutions shouldn’t be the same standards by which we judge technological ones. Berry said as much in his essay: “To me, the validity of these standards seems inherent in their applicability. They will serve the making of sewer systems or households as readily as they will serve the making of farms…”
4. Every great wisdom tradition warns us not to go down this path. It’s almost like Jurassic Park was never made.
5. As you can guess, my AI p(doom) is high. It would be higher if I didn’t think humans perfectly capable of ruining the future in other ways first.
6. Christiano left OpenAI in 2021 to help start the Alignment Research Center, a nonprofit that works to align machine learning systems with human interests and develops ways of testing the dangers of AI models.
7. For these and other characteristics of the loving resistance fighter, see Chapter 11 of Technopoly.
8. Merriam-Webster’s 2023 Word of the Year was authentic: “A high-volume lookup most years, authentic saw a substantial increase in 2023, driven by stories and conversations about AI, celebrity culture, identity, and social media.”