Hardly a day goes by without news about a breakthrough in machine intelligence or some debate about its pros and cons, most recently between Facebook’s Mark Zuckerberg and Tesla’s Elon Musk. Adding his voice to the mix, author and IT specialist Peter Scott warns that rapid AI growth comes with serious risks that, if mitigated, could take humanity to a new level of consciousness.
If we build ethical artificial intelligence and it becomes superintelligent, it could become our partner
In Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, Scott, a former contractor with NASA’s Jet Propulsion Laboratory, argues that there are two risks associated with rapid AI development. If these dangers are successfully mitigated, they “will propel us into a new utopia,” he said. Failing that, they could lead to the “destruction of the human race.”
The first risk is that AI could put biological weapons and weapons of mass destruction in the hands of average people “so that someone in their garage could create a killer virus that could wipe out millions of people.”
The second is that as the technology becomes more prevalent, someone could accidentally or deliberately cause a disaster through internet networks connecting global infrastructure. This “crisis of control,” as he calls it, is “whether we can control what we create.”
“Will we be able to control the results of this technology, the technology itself?” he asked. “There’s always been a debate about technology going back to at least the atom bomb, if not the sword, but the further we get, the more volatility there is because of the large-scale potential effects of this technology.”
There have been multiple revolutions throughout history that changed the way people lived and worked. But Scott said this time is different: “Where do we go from there? What’s left? There really isn’t much room above that in what you would call a ‘hierarchy.’”
One could argue that humans still need to program and maintain their intelligent machines. “But that is also a knowledge-transfer function,” said Scott. “The point at which machines learn that job will transform the world in an instant because they will do it much, much faster. And the big question is when will that happen?”
That could be in 10 years or 50. Whenever it happens, humans will need to find a new basis for employment, one not already taken over by machines, he said. “And it’s very hard to see what that might be in an era where machines can think as well as a human being.”
Alarm bells are already sounding about the risks of automation to human workers. Scott predicts AI will take over jobs “traditionally associated with the pinnacle of employment development” such as chief executive officer, chief technology officer, and chief financial officer. It will take longer to automate jobs like therapist and psychologist, which require sensory skills and an acute understanding of the human psyche grounded in human experience.
But the process has already begun, with AI systems like IBM’s Watson already tackling complex medical problems. And the “boundaries of what we call artificial intelligence keep getting moved,” he said. AI, which was little more than “parlor tricks” back in the 1980s, now extends to chatbots, humanoids like China’s Jiajia robot, and voice assistants holding conversations with humans – the stuff of science fiction.
Science fiction writers have already tackled some of these dilemmas. In the 1940s, prominent science fiction writer and biochemist Isaac Asimov introduced the Three Laws of Robotics to govern the creation and ethics of intelligent machines.
There are similar efforts underway to create a set of AI ethics. In January, a group of AI experts came up with the Asilomar Principles, 23 agreed-upon statements on how to create ethical artificial intelligence.
But it’s not just about ethics. “A new renaissance of the study of the human heart” is needed, said Scott, to deal with the threats of not just machine intelligence but people who could wreak havoc if they get their hands on this technology. Given enough attention and funding, he said the next revolution will be in “human consciousness.”
His hope is that professions that “repair wounds in the human heart” will evolve in partnership with an ethical AI to develop medicines more quickly and cure cancer, disease, aging, and perhaps “have something to teach us in psychology, in philosophy, ethics as well.”
“If we do that, then we will be able to coexist on a planet that has a new species of silicon beings that are many times more intelligent than us.”
One response to “Can We Control Our Intelligent Machines?”
I question whether AI could lead to an “average person” developing a bio-weapon. First, he would need to understand what the AI is telling him. Second, he’d need access to the necessary equipment. Third, he’d need to obtain workable bio-material.
Another area of concern is robotic autonomy. If I have a robot house assistant – cleaning, laundry, … – then the only ‘threat’ to my physical safety is directly related to the robot’s externally applied horsepower and anatomically equivalent range of motion. There could also be no-go strips.
One more point: displacement of workers. Who? When AI is applied to an engineering problem, there still needs to be an educated, technically trained person to understand what the AI is displaying or saying. Ever gotten confused by assembly directions?