
Why Everyone is Having the Wrong Nightmares About AI

Wednesday, December 3, 2025

By Nick Klenske

Humans may excel at abstract thinking, problem solving and communication, but one thing we’ve never been very good at is predicting the future—especially when that future involves technology.

Just take the perennial prediction that radiologists will be replaced by AI. Here at RSNA 2025, the halls of McCormick Place are again full of humans, without a cyborg in sight. “Clearly, what people think, fear, and hope for a new technology and what actually happens tend to be very different,” said Zeynep Tufekci, PhD, MA.

Dr. Tufekci is an internationally renowned techno-sociologist, New York Times opinion columnist, and professor of sociology and public affairs at Princeton University whose work analyzes the intersections of science, technology, politics and society.

Speaking at a Tuesday plenary session, Dr. Tufekci noted that there’s a pattern in how humans miss the mid- to long-term impact of transformative technologies—a pattern that can be traced back centuries. “When the printing press was invented, the Catholic church predicted it would produce more Latin bibles, preserve the faith, and sell more indulgences,” she said. 

History, however, turned out a little differently. 

Instead of preserving Catholicism, the printing press enabled the Reformation, with figures like Martin Luther using it to mass-produce pamphlets in vernacular languages, many of which circulated new ideas and critiqued church practices. 

The result was a fracturing of the Catholic church.

“This is a prime example of how humans usually view a novel technology as being a newer, faster, better version of something that already exists,” Dr. Tufekci explained.

A Car is Not a Horse

Another example of humans getting the future wrong can be found in the early days of the automobile. “We framed the car as being a new type of horse, but a car is not a horse,” Dr. Tufekci noted. 

Because we were using the wrong benchmark, Dr. Tufekci said, our fears focused on the horse: Would cars be faster than horses? Would they spook horses? “Instead, we should have been talking about how cars will increase pollution, create demand for fossil fuels and trigger suburbanization,” she said. “While the horse was put out to pasture, these issues went on to have profound, long-term impacts on the world.”

This example highlights how, when it comes to predicting a new technology’s impact, humans tend to forget about scale. “Replacing a horse with a car is one thing, but when you replace 100,000 horses with 100,000 cars you change the way a system functions—and that is where the nightmares begin,” Dr. Tufekci said.   

New Technology, Same Old Story

Fast forward to today and Dr. Tufekci sees history starting to repeat itself. “When we discuss AI, we’re again using the wrong benchmarks and are ignoring the consequences of scale,” she said.

As to the former, Dr. Tufekci noted how AI is currently being framed as a better, faster type of human intelligence when in fact it is an entirely new type of intelligence. “Whether a machine is better than a human is the wrong benchmark; the question is whether it is good enough to use at scale,” she said. 

According to Dr. Tufekci, the problems could begin once the answer to that question becomes “yes.” “In my opinion, AI’s existential threat isn’t that it’s going to terminate humanity; it’s that it’s going to make some things too easy,” she said.

To illustrate, she pointed to the humble high school essay. As Dr. Tufekci explained, these essays are used to teach students how to think, argue and write in an advanced way. “The pain is the point,” she said. “When AI removes that friction, makes things easier, we risk destabilizing the entire system.”

Judgment Day or 1984?

Dr. Tufekci said she’s worried about what society will do to maintain stability and accountability.

“We worry about AI killing off humanity, but my AI nightmare is less Terminator and more Big Brother,” she said.

To illustrate, she pointed to how some classrooms are maintaining accountability by requiring students to write essays while on camera. “When you scale this scenario to a world where AI has rendered all types of proof—video, images, testimonials—untrustworthy, the antidote could be extreme, centralized surveillance,” said Dr. Tufekci.

Despite this rather bleak prediction, Dr. Tufekci reminded the audience that humans don’t have a very good track record at predicting the future—herself included. 

“Believe it or not, I’m actually optimistic about the future and am genuinely excited about the potential benefits AI can bring,” she concluded. “But enjoying those benefits requires that we start having serious conversations about what we want from this transformative technology.”

Access the plenary, “Everyone is Having the Wrong Nightmares: AI's True Threats,” (T4-PL05) on demand at RSNA.org/MeetingCentral