You will have to blend your own smoothies for a while longer, as human-robot communication is proving to be complicated.
Communicating with robots is not easy. We humans have presumed knowledge about our surroundings, the world, which simplifies human-human communication.
We know that the cucumber changes color when we’ve peeled it.
We can predict how the apple will look once it has been cut into pieces.
We know we have to use a knife in order to cut something.
This is the kind of knowledge that robots lack, and that we need to teach them. For that to work, human-robot communication is crucial, said Dr. Joyce Chai, professor at Michigan State University, in her talk ‘Language to Action: Towards Interactive Task Learning with Physical Agents.’
“We know that strawberries need to be put somewhere in order to be cut, and that certain tools might have to be involved in this action. We know we have to take the lid off the blender before we put the fruit in. The robot doesn’t have this kind of background knowledge,” she said.
“So it’d be very difficult for the robot to understand what these actions mean, and to learn the task structure.”
Dr. Chai spoke about language communication with robots at IJCAI, the International Joint Conference on Artificial Intelligence, on July 13, 2018, in Stockholm, Sweden.
“Fifteen years ago, when I started my [Natural Language Processing]-group, I would not have dreamed of having a physical robot in my lab,” she said.
Dr. Chai works with robots, teaching them basic tasks by grounding the semantics of language in action.
“Before I started working with robots, I thought robots were smart. But then, after I started working with them, I realized they’re not very smart at all, okay.”
One of the most important ingredients in communication is common ground. In order for humans to communicate, we have to have an aligned view of the situation. For example, if you are going to discuss how great dinner was last night, everyone in the conversation must have been present at the dinner. Someone who was not there will not understand what you are talking about. The communication will be difficult because the participants do not share the same common ground.
The same goes for human-robot communication. So, how do we teach robots to do things, and is there an efficient way?
According to Dr. Chai, there are two ways. Either the human performs the task and the robot learns from observation, or the human gives instructions and the robot learns by performing. Most often, it is a combination of both, she said.
“Natural-language communication plays an important role.”
By modeling action verbs, for example ‘cut,’ Dr. Chai hopes to compute their possible outcomes and teach the robot to act accordingly, so that the next time the robot hears ‘cut’ it has a reference for what to do, having already learned, for example, how to cut an apple.
“This kind of knowledge is so fundamental, and presumed among humans, so we can see that there will be complications,” she said.
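One way to picture this idea is to ground a verb as the state change it produces in an object, together with the tool the action requires. The sketch below is only a toy illustration in Python of that notion, not Dr. Chai’s actual system; all class, attribute, and value names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class ObjectState:
        name: str
        attributes: dict          # e.g. {"form": "whole"}

    @dataclass
    class VerbGrounding:
        verb: str
        tool: str | None          # e.g. a knife is needed for "cut"
        effect: dict              # attribute changes the action produces

        def apply(self, obj: ObjectState) -> ObjectState:
            # Predict how the object will look after the action is performed.
            return ObjectState(obj.name, {**obj.attributes, **self.effect})

    # Grounding learned once, from cutting an apple ...
    cut = VerbGrounding(verb="cut", tool="knife", effect={"form": "pieces"})

    # ... and reused the next time the robot hears "cut", now for a strawberry.
    strawberry = ObjectState("strawberry", {"form": "whole"})
    print(cut.apply(strawberry))  # the form changes from "whole" to "pieces"

With a representation like this, a single learned grounding for ‘cut’ can, in principle, be reused for new objects the robot has never handled before.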
Like many other researchers in her field, Dr. Chai looks at infants and their language acquisition. She references cognitive psychologist Dr. Michael Tomasello, whose theory is that children acquire language as a byproduct of social interaction.
She and her team therefore expanded the robot’s abilities by allowing it to ask questions in case it didn’t know how to do something.
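Conceptually, that back-and-forth can be pictured as a simple loop: when the robot cannot ground an instruction in something it already knows, it asks a question instead of guessing, and remembers the answer for next time. The sketch below is a hypothetical Python illustration of that idea, not the team’s actual dialogue system; all names are made up.

    # Hypothetical sketch: ask when an instruction cannot be grounded, then remember the answer.
    known_actions = {"cut": "use the knife on the cutting board"}

    def handle_instruction(instruction: str) -> str:
        verb = instruction.split()[0].lower()
        if verb in known_actions:
            return f"Executing '{verb}': {known_actions[verb]}"
        # Missing background knowledge: ask the human instead of guessing.
        return f"I don't know how to '{verb}'. Can you show me or tell me how?"

    def learn_from_answer(verb: str, explanation: str) -> None:
        # Store the human's answer so the same question is not asked twice.
        known_actions[verb] = explanation

    print(handle_instruction("Cut the apple"))    # already grounded
    print(handle_instruction("Blend the fruit"))  # triggers a question
    learn_from_answer("blend", "put the fruit in the blender, close the lid, press start")
    print(handle_instruction("Blend the fruit"))  # now grounded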
To see a demonstration of how the robot interacts, click here.
“I believe the robot’s internal representation will have to be rich, sufficiently rich and interpretable, so that we can bring humans and robots together, to a joint understanding of the situation.”
For those of you who want to read more about Dr. Chai and her team’s research and findings, please click here.