Will Humans Be Able to Control Computers That Are Smarter Than Us?
If humans go on to create artificial intelligence, will it present a significant danger to us? Several technology luminaries have voiced this concern plainly: Elon Musk, the founder of SpaceX, has likened it to “summoning the demon”; Stephen Hawking warns it “could spell the end of the human race”; and Bill Gates agrees with Musk, placing himself in the same “concerned” camp.
Their worry is that once the AI is switched on and gradually assumes more and more responsibility in running our brave, newfangled world—all the while improving upon its own initial design—what’s to stop it from realizing that human existence is an inefficiency or perhaps even an annoyance? Perhaps the AI would want to get rid of us if, as Musk has suggested, it decided that the best way to get rid of spam email “is to get rid of humans” altogether.
No doubt there is value in warning of the dystopian potential of certain trends or technologies. George Orwell’s Nineteen Eighty-Four, for example, will always stand as a warning against technologies or institutions that remind us of Big Brother. But could the anxiety about AI just be the unlucky fate of every radical new technology that promises a brighter future: to be accused as a harbinger of doom? It certainly wouldn’t be unprecedented. Consider the fear that once surrounded another powerful technology: .