Nautilus

Why Robot Brains Need Symbols

Nowadays, the words “artificial intelligence” seem to be on practically everyone’s lips, from Elon Musk to Henry Kissinger. At least a dozen countries have mounted major AI initiatives, and companies like Google and Facebook are locked in a massive battle for talent. Since 2012, virtually all the attention has been on one technique in particular, known as deep learning, a statistical technique that uses sets of simplified “neurons” to approximate the dynamics inherent in large, complex collections of data. Deep learning has powered advances in everything from speech recognition and computer chess to automatically tagging your photos. To some people, it probably seems like “superintelligence” (machines vastly more intelligent than people) is just around the corner.

The truth is, it is not. Getting a machine to recognize the syllables in your sentence is not the same as getting it to understand the meaning of your sentences. A system like Alexa can understand a simple request like “turn on the lights,” but it’s a long way from holding a meaningful conversation. Similarly, robots can vacuum your floor, but the AI that powers them remains weak, and they are a long way from being clever enough (and reliable enough) to watch your kids. There are lots of things that people can do that machines still can’t.

There is also plenty of controversy about what we should do next. I should know: For the last three decades, since I started graduate school at the Massachusetts Institute of Technology, studying with the inspiring cognitive scientist Steven Pinker, I have been embroiled in an on-again, off-again debate about the nature of the human mind and the best way to build AI. I have taken the sometimes unpopular position that techniques like deep learning (and predecessors that were around back then) aren’t enough to capture the richness of the human mind.

That on-again, off-again debate flared up in an unexpectedly big way last week, leading to a huge tweetstorm that brought in a host of luminaries, ranging from Yann LeCun, a founder of deep learning and current Chief AI Scientist at Facebook, to (briefly) Jeff Dean, who runs AI at Google, and Judea Pearl, a Turing Award winner at the University of California, Los Angeles.

When 140 characters no longer seemed like enough, I tried to take a step back, to explain why deep learning might not be enough, and where we perhaps ought to look for another idea that might combine with deep learning to take AI to the next level. The following is a slight adaptation of my personal perspective on what the debate is all about.
