People are social creatures.
Robots… less so. But if science fiction has taught us anything, it’s that we crave emotion even in our robots – think C-3PO or Star Trek’s Data. So it stands to reason that if robots are ever to become a fixture in our society, even integrated into our households, we need to be able to read their faces. But how good are we at it? Scientists at Georgia Tech tested our ability to interpret a robot’s “emotions” from its expression, looking for differences between age groups. They found that older adults read a robot’s face in some unexpectedly different ways from younger adults. Said Jenay Beer, a graduate student in Georgia Tech’s School of Psychology: “Home-based assistive robots have the potential to help older adults age in place. They have the potential to keep older adults independent longer, reduce healthcare needs and provide everyday assistance.”
Beer and two senior colleagues used a virtual version of the iCat robot to compare two age groups: adults aged 65 to 75 and adults aged 18 to 27. The virtual iCat displayed seven expressions at varying intensities – happiness, sadness, anger, fear, surprise, disgust and neutral – and the researchers tested how well each participant could read them. The study found that older adults had difficulty recognising happiness, often confusing a happy expression with a neutral one. Beer reasoned that the result could stem from the difference between the way a human actually expresses an emotion and the way it’s exaggerated in art. “It may be due to the ‘cartoon’ look of the iCat, with the downturned mouth being very prominent,” she said.
Source: Georgia Tech