
This AI Warning Is A Myth; The Danger Is Not…

Authored by Kay Rubacek via The Epoch Times,

You know this story.

Drop a frog into boiling water, and it will scramble out immediately. But place that same frog in cool water, heat it slowly, and, degree by degree, it will fail to notice the danger until it is too late.

Most of us accept this without question.

The problem is that the story is not true.

It traces back to a German physiologist named Friedrich Goltz, who in 1869 conducted a series of experiments with a rather unusual purpose: to determine whether the soul resided in the brain or the spinal cord. He removed portions of a frog’s brain and observed what the animal could no longer do without it.

He found that a frog without its brain would sit placidly in slowly heating water and not attempt to escape. However, a normal frog, with its brain intact, would feel the rising temperature and get out.

That finding was passed around over the decades that followed, stripped of its context, and reshaped into the cautionary tale we now all repeat.

Later biologists confirmed the original finding: A frog in cold water will jump out before the water gets too hot.

The frog that stays in hot water is the one that can no longer think for itself.

We have been repeating that story for more than 150 years as settled truth, because it felt right, without ever stopping to ask whether it was actually true.

We accepted a false warning about the danger of not noticing gradual change, without noticing that the warning itself was false.

That should give us pause on its own.

But this month, it became more than an interesting historical footnote when a team of researchers from Carnegie Mellon University, the University of Oxford, MIT, and UCLA published a landmark study on how artificial intelligence (AI) is affecting human cognition.

The findings are fascinating, but the metaphor they chose caught many people’s attention.

They wrote of the boiling frog.

Scientists studying the effects of AI on the human mind described how the human cost of using AI could be “analogous to the ‘boiling frog’ effect, where each incremental act feels costless, until the cumulative effect becomes overwhelming to address.”

They were describing something that doesn’t arrive in a single dramatic moment, but degree by degree, use by use, in the ordinary decisions of ordinary days.

Whether knowingly or not, they used a story about an animal that only stops trying to escape once you remove its ability to think.

In their study, the researchers gave participants a series of mathematical reasoning and reading comprehension problems to solve. One group had access to an AI assistant throughout. The other worked alone. Then, without warning, the AI was removed, and everyone was tested independently on the same problems.

The AI group performed significantly worse.

That result, perhaps, is not surprising.

What is surprising is this: Every participant in the experiment had a skip button.

There was no penalty for using it, no reward for pushing through. The choice to try or to give up was entirely their own.

The AI group chose to skip at nearly double the rate.

This was not an inability to solve the problems.

It was an unwillingness to try. After just 10 minutes of having an AI system handle every moment of difficulty, something had changed in the participants’ choices. The researchers gave a name to what was lost: “desirable difficulties.” It is a term from cognitive science for the productive struggle that feels like an obstacle in the moment but is, over time, the process by which human beings learn, grow, and develop capabilities.

The discomfort of not knowing, the resistance of a hard problem, the effort required to work through something without being handed the answer—these are not obstacles to learning. They are learning. And AI, which is designed to be maximally helpful in the immediate moment, removes them every single time.

The concerning part is not that AI makes people less capable in any permanent or measurable sense; it is that it makes people less willing to try. It erodes the willingness to push through a challenge, the very foundation on which intelligence is built and maintained. A person who never lifts anything heavy does not lose the biological capacity for strength overnight; the strength fades gradually, through disuse.

This is what the researchers meant when they invoked the boiling frog metaphor. They were not predicting any single catastrophic failure. They were observing an accumulation of small human surrenders.

There is a generation growing up right now that is living this daily. A recent Gallup poll found that 42 percent of Gen Z respondents believe that AI is harming their ability to think carefully and will make it harder for them to learn in the future.

These are children and young adults, still with developing human brains, forming their cognitive habits, their tolerance for difficulty, and their relationship with struggle, inside an environment that has been optimized to remove all of those things for them as efficiently as possible. The researchers warned explicitly of the risk of creating a generation that has lost the disposition to struggle productively without technological support. That is not a distant possibility. It is a trajectory in motion.

The irony at the center of all of this is that we have spent 150 years repeating a false story about how humans fail to notice gradual danger. We have repeated it uncritically, without checking, because it felt familiar and instinctively correct. Now the scientists documenting the most significant gradual cognitive shift of our time have reached for that same false story to name what they are seeing.

It’s time to rewrite the boiling frog story.

The frog with a functioning brain gets out of the water before it gets too hot.

That capacity to feel the rising temperature, to recognize what is happening, and to choose to respond is not a small thing. It is, in the context of this moment, very nearly everything.

The question worth asking is not whether we are using AI. Most people already are, and that will not change. The question is how we respond to the rising temperature.

Tyler Durden
Tue, 04/21/2026 – 17:00