The Limits of Artificial Intelligence


John Stonestreet

Timothy D. Padgett

In 2018, comedian John Mulaney closed out his opening monologue as host of Saturday Night Live with this quip about one of the stranger aspects of today’s world, one that didn’t exist just a few years ago: “You spend a lot of your day telling a robot that you’re not a robot.”

Think about it. Artificial intelligence is one of the “new normals” of contemporary life. Every time we access data on the web, every customer service call we make, and every ordering process we start involves not just using, but communicating with, a machine. Smartphones, smart cars, smart networks—artificial minds are now the gatekeepers of information, transportation, and commerce.

But how smart are they?

In sci-fi, the story always ends with computers evolving past and outclassing human minds. Sometimes they’re dangerous; sometimes they’re helpful; and sometimes, most unsettlingly, they cannot be differentiated from humans. Lurking behind the fantasy is an important question: What happens if we create something that’s smarter than us? Undeterred, computer engineers and neuroscientists continue trying to push science fiction into science fact.

The problem with these efforts, a recent article in the online magazine Salon notes, is that the quest for artificial intelligence tends to “treat intelligence computationally.” Attempts to recreate and even surpass the purely computational abilities of the human brain have succeeded. Computers can now play games and analyze images faster and better than humans.

At the same time, there’s real doubt as to whether machines are anywhere near matching wits with their creators. According to a piece last year in The Guardian, “Despite the vast number of facts being accumulated, our understanding of the brain appears to be approaching an impasse.”

For example, it’s estimated that about 95 percent of brain activity involves what are called spontaneous fluctuations, or neural impulses, independent of both conscious thought and outside influence. That’s a problem that shuts machines down. As the Salon piece puts it, “For computers, spontaneous fluctuations create errors that crash the system, while for our brains, it’s a built-in feature.”

Uniquely human thought arises from this chaos, unpredictable and unreproducible. What we think of as intelligence—reason, logic, and processing—may instead be the end result of consciousness, not the means of achieving it.

While Salon’s analysis is helpful, it misses something essential: it assumes that the mind and the brain are identical, that there’s nothing more to our minds than “meat.” While this is a common assumption of a naturalistic worldview, it’s a worldview that will never be big enough to explain human cognition, much less motivation and behavior.

David Gelernter’s analysis, given over 20 years ago after the chess-playing program Deep Blue beat the world’s top player, says it better:

How can an object that wants nothing, fears nothing, enjoys nothing, needs nothing and cares about nothing have a mind? … What are its après-match plans if it beats Kasparov? Is it hoping to take Deep Pink out for a night on the town? It doesn’t care about chess or anything else. It plays the game for the same reason a calculator adds or a toaster toasts: because it is a machine designed for that purpose.

Or as philosopher Mortimer Adler noted over thirty years ago: “[T]he brain is not the organ of thought … an immaterial factor in the human mind is required.” We’ve made great strides in understanding certain elements of our biology, as well as in our ability to imitate certain behaviors with machines. But it’s just that: an imitation.

As Gelernter put it, “Computers do what we make them do, period. However sophisticated the computer’s performance, it will always be a performance.”

The more we learn of the brain and of human consciousness, the more it affirms that humans are not just meaty machines.


Artificial intelligence research may have hit a dead end

Thomas Nail | Salon | April 30, 2021

Why your brain is not a computer

Matthew Cobb | The Guardian | February 27, 2020

The Oxford Handbook of Spontaneous Thought

Kalina Christoff & Kieran C.R. Fox | Oxford Handbooks Online | 2018
