Image courtesy of yui.kubo / Flickr
Tesla Motors CEO Elon Musk compares large tech companies' increasing focus on developing artificial intelligence, or AI for short, to the summoning of a demon. Microsoft co-founder Bill Gates and world-renowned theoretical physicist Stephen Hawking emphasize the need for control and regulation of a synthetic super-intelligence. In a Reddit “Ask Me Anything” online forum, Gates writes, “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
For Jairus Grove, an assistant professor at the University of Hawai‘i at Mānoa who shares concerns about the possibility of a future sentient machine, the emphasis is more on its simplicity. “An extraordinarily simplistic version of intelligence would be a remarkable thing,” he says. “But having a consciousness in an infinitely large and small space would dramatically alter its perception of reality and could make it unpredictable and dangerous.”
The concerns of some of the world’s most intelligent minds are rooted in the possible birth of an emergent sentient mind within machines, one that would quickly displace the human race at the top of the evolutionary ladder. They believe the danger is imminent because of the growing ways in which humans and machines are now intertwined. From complex tasks to minuscule distractions, we eat, sleep, work, and play according to our technology. We are nearly cybernetic in every possible way.
David Chin, Chair of the Department of Information and Computer Sciences at UH Mānoa, understands how entwined the human race is with machines, and what the implications might be. “It’s good to think about these possibilities, because while these complex advances may be far off, they would affect literally everything in our day-to-day life when they come to fruition,” he says.
Chin notes that while the algorithms designed for simple tasks are benign by themselves, when coupled with dangerous policy and given free rein, they can become something entirely beyond human control. Grove agrees: “The egos driving these advances are sometimes dangerous and irresponsible. … Ultimately, as with all doomsday-type scenarios, the strength of our leaders and the egos of those who have developed this malevolent entity would have to regulate its capabilities to ensure our survival.” Or, in the face of global catastrophe, a ubiquitous cyber-intelligence could placidly analyze its zeroes and ones and come to the logical conclusion: mankind is not fit for the future, and the future is now.