Disney World could one day feature some of the most realistic animatronic characters on the planet, making your stay that much more magical. Imagine robots that can accurately follow your gaze while talking to you, raise their eyebrows, and even periodically break eye contact, just like any stranger would.
Scientists at Disney Research, the network of labs supporting the company’s technological endeavors, have recently devised a new system for creating a lifelike robotic gaze.
By introducing minute “secondary behaviors” that humans exhibit in a conversation—from the flicker of the pupils between focal points, to the faint tilt of the head—the team managed to craft a machine that feels sort of human. The scientists presented their paper at the International Conference on Intelligent Robots and Systems last fall.
In effect, the humanoid robot comes off as incredibly lifelike, despite its face being mostly uncovered, exposing the electronics beneath. For now, that’s fine; Disney artists can enhance the face later, Doug Fidaleo, director of Disney Research Los Angeles, tells Pop Mech.
Fidaleo’s team is responsible for the hardware and software that could one day appear in Disney’s proprietary “Audio-Animatronics” figures, which the company uses to create repeatable live shows and experiences (like “It’s a Small World” and its 300 Audio-Animatronics dolls). So far, the results have been pretty convincing.
“I know the first time that I sat in front of [the robot], [I got] a little nervous, because you actually believe that this thing is alive,” Fidaleo says. “That threshold of feeling something, nervousness or something, is critical.”
It takes serious finesse on the software side to build that sense of realism. The engineering team places most of its emphasis on transitions and blending, so that from one moment to the next there are no hard stops that might give away the animatronic figure's robotic identity and break the illusion.
If the robot is looking at one person, for instance, and then another child walks up to it, the animatronic figure can sense that with its onboard RGB camera, says James Kennedy, a research scientist with Disney Research Los Angeles. From the perception side, the robot gets a new set of coordinates to look at, and it will slowly transition its gaze to focus on that second child.
“Now, you have to make the decision of, ‘How do I go from where I am currently facing and get to these new coordinates?’ And, you know, as a robot, that’s a very simple problem,” Kennedy tells Pop Mech. “You can draw a straight line and do that. But that’s not particularly believable. People don’t move in that way. And so there are a lot of these small details that we do.”
Through the programs his team has built, Kennedy says it’s possible to dictate how long one glance should last before slowly blending into another motion. From there, the software is fit with rules for the acceleration and deceleration of the motors that control the robot’s neck, face, and torso along a particular curve.
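Disney Research hasn't published its control code, but the idea Kennedy describes can be sketched in a few lines: instead of sweeping linearly to a new gaze target, the joint angle follows an eased curve with zero velocity at both ends. The function names and the smoothstep curve here are illustrative assumptions, not the team's actual software.

```python
def smoothstep(t: float) -> float:
    """Ease-in/ease-out curve: zero velocity at both endpoints,
    so motion starts and stops without a telltale jerk."""
    return t * t * (3.0 - 2.0 * t)

def blend_gaze(start_angle: float, target_angle: float,
               duration: float, elapsed: float) -> float:
    """Interpolate a joint angle (e.g. neck yaw, in degrees) from the
    current pose toward a new gaze target along the eased curve."""
    t = min(max(elapsed / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return start_angle + (target_angle - start_angle) * smoothstep(t)

# A straight linear sweep would snap at both ends; the eased version
# accelerates gently, cruises, then decelerates into the new target.
trajectory = [blend_gaze(0.0, 30.0, 1.0, step / 10) for step in range(11)]
```

The key design choice is that acceleration and deceleration are shaped by the curve itself, so no separate "stop" command is ever visible to the guest.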
For example, one of the most impressive features relates to saccades, or quick, simultaneous movements of the eyes between fixation points. Think about making eye contact during a job interview, when you’re probably most aware of your body language. Your eyes don’t remain static while looking at your future boss, but rather, they subtly dart back and forth.
So, if you wanted to have a staring competition with this animatronic bust, you’d probably win—and that’s by design.
“It would be quite unnerving for [the robot] to fixate on a single point on your face,” Kennedy says. “It’s something we could do technologically, but it would be quite unnatural to people.”
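The saccade behavior Kennedy describes amounts to adding small random offsets around the nominal fixation point so the eyes never lock rigidly onto one spot. The sketch below is a hypothetical illustration of that idea; the offset range and function names are invented, not taken from the paper.

```python
import random

def saccade_offset(rng: random.Random, max_deg: float = 1.5):
    """Pick a small random angular offset (in degrees) around the
    current fixation point, imitating the quick darting of human eyes."""
    return (rng.uniform(-max_deg, max_deg), rng.uniform(-max_deg, max_deg))

def gaze_with_saccades(target, rng: random.Random, steps: int = 5):
    """Jitter the nominal gaze target with micro-saccades so the eyes
    never fixate on a single point on a guest's face."""
    tx, ty = target
    return [(tx + dx, ty + dy)
            for dx, dy in (saccade_offset(rng) for _ in range(steps))]
```

Because the offsets stay within a degree or two, the robot still clearly "looks at" the person; it just loses the staring contest on purpose.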
The researchers have programmed the robot to sort of mimic what the people in its line of sight are doing, from tilting its head in sync with guests, to blinking, and even subtly “breathing.” Engineers combine these motions in a few different states of being based on a “curiosity score” that records the number and type of stimuli in the surrounding environment.
In the robot’s default “read” state, it uses eye motions that make it seem like the figure is reading a book at torso level. In the “glance” state, the robot takes a look at the person of interest and tilts its head in the appropriate direction, just like a real person might shift if you distracted them while reading.
In the “engage” state, which is triggered by an even higher curiosity score, the robot looks at the person of interest while turning its head. Finally, there’s the “acknowledge” state, which the robot uses after it detects the person of interest is familiar, someone it has already interacted with.
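The four states above behave like a simple state machine keyed on the curiosity score. As a rough sketch, with thresholds invented for illustration (the researchers' actual values and logic may differ):

```python
def attention_state(curiosity: float, familiar: bool) -> str:
    """Map a curiosity score (assumed here to run 0-1) and a familiarity
    flag to one of the four attention states. Thresholds are hypothetical."""
    if familiar:
        return "acknowledge"  # person already interacted with the robot
    if curiosity < 0.3:
        return "read"         # default: eyes down, as if reading a book
    if curiosity < 0.7:
        return "glance"       # brief look plus a head tilt toward the stimulus
    return "engage"           # full head turn toward the person of interest
```

Rising stimuli push the robot up the ladder from reading to glancing to engaging, while recognizing a familiar face short-circuits straight to acknowledgment.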
This imparts the animatronic bust with what appears to be a strange sense of empathy. Humans, monkeys, and even birds have “mirror neurons” in the brain that fire when an animal sees another being performing the same action. Robots don’t have a biological framework like this on which to rely, so these four states of being are a close second.
Fidaleo is careful to note Disney hasn’t yet confirmed any future use cases for the realistic robot gaze.
Still, we do feel a little bit bad for the “It’s a Small World” animatronic dolls, which haven’t changed much since the 1960s. With these realistic robots on the horizon, they’re in for some rough competition. Maybe it’s time to start taking notes.