Robots can be helpful as mental wellbeing coaches at work, but how well they are received depends largely on how they look.
In a study conducted by University of Cambridge researchers at a digital consultancy firm, 26 employees took part in weekly wellbeing sessions for four weeks, each guided by one of two robot wellbeing coaches. Although the robots had identical voices, facial expressions and session scripts, their physical appearance affected how participants engaged with them.
Participants who did their wellbeing exercises with a toy-like robot reported feeling a stronger connection to their ‘coach’ than those who worked with a humanoid robot. According to the study, perceptions of robots are shaped by popular culture, where the only limit on what robots can do is the human imagination. When people encounter a robot in the real world, however, it often falls short of those expectations.
Because the toy-like robot’s simpler appearance set lower expectations, participants may have found it easier to communicate with. The humanoid robot, by contrast, invited expectations of fluent conversation that it could not meet, making participants aware of the gap between expectation and reality.
The researchers claim that their study demonstrates that robots might be a beneficial tool to boost mental wellbeing in the workplace, despite the discrepancies between expectations and reality. The results will be reported today (15 March) at the ACM/IEEE International Conference on Human-Robot Interaction in Stockholm.
The World Health Organization (WHO) recommends that employers take action to promote and protect mental wellbeing at work, but the implementation of wellbeing practices is often limited by a lack of resources and personnel. Robots have shown some early promise for helping address this gap, but most studies on robots and wellbeing have been conducted in a laboratory setting.
“We wanted to take the robots out of the lab and study how they might be useful in the real world,” said Dr. Micol Spitale, the paper’s first author.
The researchers worked with a nearby technology firm, Cambridge Consultants, to develop and implement a robot-based workplace wellness program. Over the course of four weeks, employees were guided through four different wellbeing exercises by one of two robots: either the QTRobot (QT) or the Misty II robot (Misty).
The QT is a childlike humanoid robot roughly 90cm tall, while Misty is a 36cm-tall toy-like robot. Both robots have screen faces that can be programmed with different facial expressions.
“We interviewed different wellbeing coaches and then we programmed our robots to have a coach-like personality, with high openness and conscientiousness,” said co-author Minja Axelsson. “The robots were programmed to have the same personality, the same facial expressions and the same voice, so the only difference between them was the physical robot form.”
In the experiment, a robot in an office meeting room led participants through four positive psychology exercises. Each session began with the robot asking participants to recall a happy memory or to list something they were grateful for, and the robot then followed up with further questions.
Participants interacted with the same robot throughout, completing one session per week for four weeks. After the sessions, they evaluated the robot through a questionnaire and an interview.
Participants who worked with the toy-like Misty robot reported a stronger working connection with it than those who worked with the child-like QT robot, and had a more positive perception of Misty overall.
“It could be that since the Misty robot is more toy-like, it matched their expectations,” said Spitale. “But since QT is more humanoid, they expected it to behave like a human, which may be why participants who worked with QT were slightly underwhelmed.”
“The most common response we had from participants was that their expectations of the robot didn’t match with reality,” said Professor Hatice Gunes from Cambridge’s Department of Computer Science and Technology, who led the research. “We programmed the robots with a script, but participants were hoping there would be more interactivity. It’s incredibly difficult to create a robot that’s capable of natural conversation. New developments in large language models could really be beneficial in this respect.”
“Our perceptions of how robots should look or behave might be holding back the uptake of robotics in areas where they can be useful,” said Axelsson.
Although the robots used in the experiment are not as sophisticated as C-3PO or other fictional robots, participants said they found the wellbeing exercises helpful and were open to the idea of conversing with a robot in the future.
“The robot can serve as a physical reminder to commit to the practice of wellbeing exercises,” said Gunes. “And just saying things out loud, even to a robot, can be helpful when you’re trying to improve mental wellbeing.”
The team is now working to make the robot coaches more responsive during the coaching sessions and interactions.
The Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI), supported the research. Hatice Gunes is a Staff Fellow of Trinity Hall, Cambridge.