A team of York University researchers has been leveraging its expertise in artificial intelligence (AI) and robotics to create empathetic robots that can help in search-and-rescue missions, act as security guards and manage patients in busy health-care settings.
As part of their work, Michael Jenkin, a professor of electrical engineering and computer science at York's Lassonde School of Engineering, and his colleagues have been exploring how to build trust between people and machines.
“We ask questions like, ‘What shape should a robot be? Should a robot have a face on it? If a robot has a face on it, is it important what the face looks like?’” Jenkin says. “If you were going to talk to a robot or talk to a kiosk, what should it look like? Should it be big or small? Do people prefer text or is audio better?”
The group has also been looking at how to build robots that respond to humans in ways that emulate empathy, using AI and machine learning to teach a robot to read a person's emotional state from cues such as their mannerisms and the speed at which they speak, and to respond accordingly.
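The article does not detail the team's models, but the general pattern it describes (extracting simple speech cues and mapping them to an emotional state with a supervised classifier) can be sketched. Everything below, including the feature names, labels and training data, is a hypothetical illustration rather than the York team's actual system.

```python
# Illustrative sketch only: a toy classifier mapping hypothetical speech
# features (speaking rate, pitch variability) to a coarse emotional state.
# Not the York team's system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: columns are [words_per_minute, pitch_variability]
calm = np.column_stack([rng.normal(130, 15, 50), rng.normal(0.2, 0.05, 50)])
agitated = np.column_stack([rng.normal(190, 20, 50), rng.normal(0.6, 0.10, 50)])
X = np.vstack([calm, agitated])
y = ["calm"] * 50 + ["agitated"] * 50

clf = RandomForestClassifier(random_state=0).fit(X, y)

# The predicted state then drives how the robot responds.
state = clf.predict([[185, 0.55]])[0]
if state == "agitated":
    print("Robot: I can see this is stressful. Let me slow down and help.")
else:
    print("Robot: How can I help you today?")
```

A real system would draw on far richer signals (prosody, facial expression, word choice) and far more data, but the shape of the pipeline is the same: speech features in, a predicted emotional state out, and a response tailored to that state.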
What they found is that people bring their own prejudices to these interactions, just as they do when talking to other people. For example, people generally prefer robots acting as security guards to look like men, while in health support roles they prefer robots that look like women.
“We have lots of choices in the design space,” Jenkin says. “What's the right one?”
In one collaboration with industrial partners, including an Ontario-based firm called Cloud Constable, Jenkin and the team looked at how to build a robot that can act as a security guard in public spaces, defusing potential conflict.
“We considered things like, does this kind of robot need a face? If the robot has the ability to provide lethal and non-lethal force, who’s going to take responsibility when it goes wrong? Or, even if it goes right, who’s responsible?” Jenkin says.
As AI revolutionizes how people collect and process data, and transforms how we work and live, Jenkin says researchers like him – all over the world, in every sector and service – are thinking through questions like these as they try to build machines people are comfortable interacting with.
He points to a recent experiment in Europe that probed whether people would accept hugs from robots. The trial looked at whether people prefer such robots to be padded to replicate the human body, and how long people are willing to stay in the embrace of a machine. Spoiler alert: not long.
Jenkin, who is part of York U’s Connected Minds project focused on the ethical use of AI, says there are, of course, issues of privacy and security to consider as machines leverage our personal data to appear more human.
Something else to consider is whether using machines to solve problems is just masking deeper societal issues that should be addressed.
Robots, for example, can be programmed to act as companions to isolated seniors, sitting with them while they eat; evidence suggests this encourages seniors to eat more and stay healthy longer. But is that the answer to the broader concern about how we treat our most vulnerable?
“This space is in its infancy,” Jenkin says. “There are problems that we have as a society that robots can address. Addressing the fundamental problem is much more desirable, but until we have that fixed, maybe AI can provide a stopgap measure and be part of the solution in that way.”