It’s Elementary. The Problem With Artificial Intelligence Agents

Posted April 29th, 2016 at 11:55 am (UTC-4)

A man takes pictures with humanoid robot Jiajia, produced by the University of Science and Technology of China, at Jiajia’s launch event in Hefei, Anhui province, April 15, 2016. Jiajia can converse with humans and imitate facial expressions, among other features. (Reuters/China Stringer Network)

The rapid advance of artificial intelligence (AI) technologies could change the landscape in health, education and social interaction. But it is also a cause for concern in the absence of safety regulations or a code of ethics to govern the use of humanoids.

They are already on the job – AI helpers in factories, hotels and shopping malls, in self-driving cars, and at home conversing with children or helping the elderly. More will appear online to engage social media users and offer content and services.

This photo, distributed by Feature Photo Service for IBM, shows visitors to the Hilton Hotel in McLean, Va. meeting the robot concierge ‘Connie’, named after Conrad Hilton and powered by IBM Watson and WayBlazer. Connie uses cognitive computing and machine learning to answer questions posed in natural language while learning from the interactions. (AP/Green Buzz Agency/Feature Photo Service for IBM)

AI systems are programmed to make decisions and react based on certain models of human behavior. In many cases, they are harmless, as with Google’s AlphaGo, which is programmed to play the ancient, intuitive board game of Go, and IBM’s Watson, which once competed on Jeopardy! but now tackles education and disease, among other things.

But the range of AI applications is expanding quickly. China just unveiled a security robot that comes with an SOS button but can also deliver electric shocks to protesters should the need arise – whatever that may be.

What could possibly go wrong?

Bilge Mutlu, an associate professor at the University of Wisconsin-Madison, dismissed the notion that AI systems could go rogue or even surpass human intelligence in the foreseeable future.

Today’s robots are “incapable of reasoning about the world using their own understanding systems,” said Jonathan Mugan, co-founder and CEO of Deep Grammar. That makes them unlikely to understand that “we’re enslaving them,” he added jokingly, or to “rise up” against their human masters as they might in a science fiction movie.

“Even if they are able to understand what we tell them to do, are they going to do it in a way that we want them to do it?” asked Mugan. “So they may understand the task, but not understand that certain things are off-limits in order to achieve that task.”

That might be a problem.

A self-driving car, for example, could come up against “some weird situation” that wasn’t programmed into the system. “They don’t know what to do,” he added, “because they don’t have this knowledge as a human to fall back on.”
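
Mugan’s point – that a system can pursue its task while violating constraints nobody wrote down – can be sketched in a few lines of code. The following is a hypothetical illustration (the action names, rule list and scoring function are all invented, not drawn from any real vehicle’s software) of an agent that screens candidate actions against an explicit off-limits list; anything its designers failed to enumerate slips straight through:

    # Hypothetical sketch: a task-driven agent that only respects
    # the constraints its designers thought to write down.

    OFF_LIMITS = {
        "cross_double_yellow",       # rules the designers anticipated
        "enter_occupied_crosswalk",
    }

    def choose_action(candidate_actions, task_score):
        """Pick the highest-scoring action that is not explicitly banned."""
        allowed = [a for a in candidate_actions if a not in OFF_LIMITS]
        if not allowed:
            return "stop_and_wait"   # fallback when nothing is clearly safe
        # The agent maximizes its task objective; it has no human-style
        # background knowledge to flag hazards nobody listed.
        return max(allowed, key=task_score)

    # A "weird situation" the designers never enumerated sails through:
    print(choose_action(["drive_through_sandbags", "cross_double_yellow"],
                        task_score=len))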

A photo provided by the Santa Clara Valley Transportation Authority in California shows damage to a public bus after a self-driving Lexus SUV, operated by Google, collided with it in Mountain View, Feb. 14, 2016. Scrapes are seen in front of and behind the door, and a piece of the car is stuck in the door. (AP)

That underlying domain knowledge, gained through evolution and experience, gives humans what Mugan calls a “grounded understanding of the world” around them.

“Even our most profound intelligences go all the way down to this kind of physical world embodied knowledge that we have,” he added. “And the problem right now is we haven’t figured out how to give robots this kind of embodied knowledge … Nobody has figured how to do that.”

That deficit was painfully apparent when Microsoft released its teen chatbot Tay.ai last month. Tay was subsequently withdrawn after Internet trolls taught it to repeat all sorts of hate speech while it was live.

Developers typically confine the amount of learning an AI system can do to the laboratory so it doesn’t get “out of whack,” said Mugan. And Mutlu added that designers should be able to predict potential malicious uses or things that could go wrong while the program is in development.
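
One common way to confine learning to the laboratory, as Mugan describes, is to let a model learn during development and then deploy it with further learning switched off, so live users cannot rewrite its behavior the way Tay’s trolls did. A minimal sketch of that pattern, with invented class and method names, might look like this:

    # Hypothetical sketch: learn offline, then freeze the model
    # before it faces real users.

    class ChatModel:
        def __init__(self):
            self.responses = {}          # learned prompt -> reply table
            self.learning_enabled = True

        def learn(self, prompt, reply):
            if not self.learning_enabled:
                return                   # live traffic cannot alter behavior
            self.responses[prompt] = reply

        def respond(self, prompt):
            return self.responses.get(prompt, "I don't know that yet.")

    model = ChatModel()
    model.learn("hello", "Hi there!")    # training in the lab

    model.learning_enabled = False       # freeze before deployment
    model.learn("hello", "something abusive a troll typed")  # ignored
    print(model.respond("hello"))        # still prints "Hi there!"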

“If you design things well,” Mutlu said, “they will not misbehave.”

A good designer, he added, follows an iterative, user-centered process that corrects and adjusts as long as necessary.

“Once I put it out there,” he added, “what are the responses I’m going to get? How do I deal with users who are playing with the system? How do I deal with malicious input?”
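
In practice, one defense against malicious input is a validation layer that screens what users type before the system acts on it or learns from it. Here is a rough, assumed sketch (the blocklist and function names are illustrative, not any vendor’s API):

    # Hypothetical sketch: screen user input before the system
    # acts on it or learns from it.

    import re

    # Placeholder patterns; a real filter would be far more extensive
    # and likely curated or learned rather than hand-written.
    BLOCKED_PATTERNS = [
        re.compile(r"\bbadword\b", re.IGNORECASE),
        re.compile(r"\bslur\b", re.IGNORECASE),
    ]

    def is_acceptable(text):
        """Reject input that matches any blocked pattern."""
        return not any(p.search(text) for p in BLOCKED_PATTERNS)

    def handle_user_input(text, respond):
        if not is_acceptable(text):
            return "Sorry, I can't engage with that."
        return respond(text)

    print(handle_user_input("hello there", respond=str.upper))    # -> HELLO THERE
    print(handle_user_input("a badword here", respond=str.upper)) # rejected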

As AI programs become more prevalent, Mugan suggested a safety code might be needed, particularly to determine liability if an AI system causes damage or harm, or if it faces a decision it was never programmed for, making human intervention necessary.

“That’s the kind of case where our social values kind of take over what the machine would do otherwise,” he said.

Discussions are underway in the research community to formulate guidelines for artificial agents that interact socially with people. And Mutlu suggests that a developers’ code of ethics, one that requires designers to think about potential malicious uses of their creations, should be built into the design process.

“That will definitely make the design stronger,” he added. “But if you don’t have that … you might be inviting hackers, hacking and all this stuff.”

On the other hand, ethical considerations might turn out to be so important that they form “the core of what we build in the future so that we will never find ourselves in that situation,” he said. “We don’t know what the future is going to hold and what our values are going to be in 10-20 years.”

Aida Akl
Aida Akl is a journalist working on VOA's English Webdesk. She has written on a wide range of topics, although her more recent contributions have focused on technology. She has covered both domestic and international events since the mid-1980s as a VOA reporter and international broadcaster.
