When we talk with someone, words aren’t the only things that affect our listener. Other subtle factors – such as tone of voice, body language and eye contact – also carry powerful communicative potential.
Bilge Mutlu, a computer scientist at the University of Wisconsin-Madison, understands and appreciates the power of nonverbal communication.
The professor calls himself a human-computer interaction specialist. His work involves taking characteristics of human behavior and replicating them in robots or animatronic characters.
Mutlu is leading a team that’s developing computer algorithms based on how people communicate without words. These algorithms are then used to program devices, such as robots, to look and act more human-like, helping to bridge the gap between man and machine.
A person’s gaze is one facet of nonverbal communication that Mutlu finds especially interesting.
“It turns out that gaze tells us all sorts of things about attention, about mental states, about roles in conversations,” he says.
For example, if you focus your gaze on a specific individual while talking to a group of people, it communicates that what’s being said is especially relevant to that individual.
Research also shows that when you finish saying something in a conversation with your gaze directed at one particular person, that person is likely to take the next speaking turn in the discussion.
These nonverbal cues tell people where our attention is focused and what we mean when we direct a question or comment in a conversation.
When people really mean what they’re saying, they might open their eyes wider, look directly at the person they’re addressing and try to reinforce their message or thought through facial and other cues.
To convert these subtle cues of human communication into data and language that can be used by a robot, Mutlu’s team takes a computational approach. They break down each human cue or gesture into minute segments or sub-mechanisms – such as the direction of the eyes versus the direction of the head or how the body is oriented – which can be modeled.
Then, temporal dimensions are added to the model. These include how long a target is looked at and when the gaze should shift from the face to somewhere else.
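To make the idea of sub-mechanisms and temporal parameters concrete, here is a minimal toy sketch of what such a gaze model might look like in code. This is an illustration only, not Mutlu’s actual algorithm; all class and parameter names (for example, the fixation and aversion durations, and the assumption that the head orients only partway toward a target while the eyes lead) are hypothetical.

```python
class GazeModel:
    """Toy gaze controller illustrating the decomposition described above:
    eye direction and head direction are separate sub-mechanisms, and
    temporal parameters govern how long each gaze target is held.
    (Hypothetical sketch, not the research team's actual model.)"""

    def __init__(self, face_fixation_s=2.0, aversion_s=0.8):
        # Temporal parameters: how long to hold gaze on the listener's
        # face, and how long to briefly look away before returning.
        self.face_fixation_s = face_fixation_s
        self.aversion_s = aversion_s

    def plan(self, listener_angle_deg, total_s):
        """Return a timed schedule of (target, eye_deg, head_deg, duration)."""
        schedule, t = [], 0.0
        while t < total_s:
            # Sub-mechanism 1: the eyes fixate the listener's face.
            # Sub-mechanism 2: the head orients only partway toward the
            # listener (eyes lead; here the head covers ~2/3 of the angle).
            schedule.append(("face", listener_angle_deg,
                             0.66 * listener_angle_deg, self.face_fixation_s))
            t += self.face_fixation_s
            # Brief gaze aversion: the eyes shift away, the head stays put.
            schedule.append(("away", listener_angle_deg + 30.0,
                             0.66 * listener_angle_deg, self.aversion_s))
            t += self.aversion_s
        return schedule

model = GazeModel()
for target, eye_deg, head_deg, dur in model.plan(listener_angle_deg=20.0,
                                                 total_s=6.0):
    print(f"{target:>4}: eyes {eye_deg:5.1f} deg, head {head_deg:5.1f} deg, {dur:.1f}s")
```

A robot running such a schedule would alternate between fixating the listener and briefly glancing away, which is the kind of timed, decomposed behavior the article describes being modeled computationally.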
If he’s designing a robot for an educational role, Mutlu incorporates these nonverbal behaviors. The research team has found that learning improves when a robot teacher uses these cues, compared with a robot that lacks them.
Mutlu’s goal is not to duplicate a human being in robot form, or to have robots mimic people on a one-to-one basis. He wants to find the key mechanisms that help us communicate effectively, reproduce them in robots, and enable these systems to connect with us in the way we humans communicate with each other.
Professor Bilge Mutlu joins us on this week’s radio edition of “Science World” to talk about how we all stand to benefit from his work. Tune in (see right column for scheduled times) or check out the interview below.
Other stories we cover on the “Science World” radio program this week include:
- ‘Titanic’ director recreates 1960 voyage to the deepest part of the ocean
- Astronomers map Jupiter’s moon Io
- With a scarcity of doctors, poor and remote areas of India still rely on faith healing
- The International Space Station’s robot astronaut
- Current plans to limit global warming are unlikely to prevent major sea-level rise
- Strategy to develop safe and effective tuberculosis vaccines unveiled in South Africa