Not all robots emulate human anatomy, and those that do tend to count facial expression among their weakest points: most are limited to a permanent "poker face." Even so, anthropomorphic robots have become increasingly common in recent years, assisting with certain industrial activities and specific services.
Building on an interest cultivated over recent years, researchers from the Creative Machines Lab at Columbia University's School of Engineering, in the United States, have presented EVA, a new autonomous robot with a soft, expressive face that responds to the expressions of nearby humans, the product of five years of work.
A robot that responds by imitating facial expressions
Given the advancing "humanization" of service robots, the Columbia team set out to build a robot with an expressive, responsive human face.
From the outset, the initiative posed a major challenge. On one side is the complexity of the human face, with its skin, bones and more than 42 muscles; on the other, robotics typically relies on rigid, heavy and bulky materials, qualities poorly suited to the needs of this project.
The solution for building EVA as a sufficiently compact and functional system came from 3D printing, which made it possible to manufacture parts with complex shapes that fit precisely within the form of a human skull.
As reported in its official presentation, EVA can express the six basic emotions (anger, disgust, fear, joy, sadness and surprise), as well as a range of more nuanced ones, using artificial "muscles" made of cables and motors that pull on specific points of EVA's face, mimicking the action of human facial muscles.
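EVA's actual control software is open source, but as a rough illustration only, a cable-and-motor face of this kind could map emotions to actuator tensions along these lines. All motor names and values below are invented for the sketch; they are not EVA's real interface:

```python
# Hypothetical sketch: posing a cable-driven robot face for a basic emotion.
# Motor names and tension values are illustrative, not EVA's actual API.

BASIC_EMOTIONS = {
    "joy":      {"mouth_corner_left": 0.9, "mouth_corner_right": 0.9, "brow_raise": 0.3},
    "sadness":  {"mouth_corner_left": 0.1, "mouth_corner_right": 0.1, "brow_inner": 0.8},
    "surprise": {"brow_raise": 1.0, "jaw_open": 0.7},
}

def pose_face(emotion):
    """Return cable tensions (0 = slack, 1 = fully pulled) for a basic emotion."""
    try:
        return BASIC_EMOTIONS[emotion]
    except KeyError:
        raise ValueError(f"no pose defined for {emotion!r}") from None

def blend(pose_a, pose_b, t):
    """Linearly blend two poses; one simple way to get more nuanced expressions."""
    motors = set(pose_a) | set(pose_b)
    return {m: (1 - t) * pose_a.get(m, 0.0) + t * pose_b.get(m, 0.0) for m in motors}
```

Blending two basic poses, as in `blend(pose_face("joy"), pose_face("surprise"), 0.5)`, is one plausible route to the "more nuanced emotions" mentioned above.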
This robot, whose plans were released under an open-source license, acts as a mirror of what it can capture from the expressions of nearby human faces, thanks to artificial intelligence. Through deep learning techniques, EVA conducts its own "trial and error," reviewing videos of its past performance; it learned to evaluate itself much as a person does when watching themselves on a video call.
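The study describes this learning pipeline with deep neural networks trained on camera images; the following is only a one-dimensional toy analogy, not the authors' method. It captures the trial-and-error idea in miniature: babble motor commands, observe the resulting facial movement on "video," fit a self-model, then invert that model to imitate a displacement observed on someone else's face:

```python
import random

# Toy analogy of trial-and-error self-modelling (not EVA's real pipeline).
# A hidden "face physics" moves a landmark by TRUE_GAIN mm per unit of
# cable tension; the robot must discover this gain from its own experiments.

TRUE_GAIN = 2.0

def observe(command):
    """Stand-in for watching the face's response to a motor command on video."""
    return TRUE_GAIN * command

def learn_self_model(trials=500, lr=0.1, seed=0):
    """Estimate the face's gain by issuing random commands and fitting errors."""
    rng = random.Random(seed)
    gain = 0.0  # the robot's current estimate of its own face physics
    for _ in range(trials):
        cmd = rng.uniform(0.0, 1.0)       # babble a motor command
        seen = observe(cmd)               # watch the actual result
        error = gain * cmd - seen         # prediction error of the self-model
        gain -= lr * error * cmd          # gradient step on the squared error
    return gain

def imitate(target_displacement, gain):
    """Invert the learned self-model to reproduce an observed movement."""
    return target_displacement / gain
```

After learning, `imitate(1.0, learn_self_model())` returns the command the robot believes will move its landmark by 1 mm, which is the inverse-model step that lets it copy an expression it has only observed.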
Beyond mimicking gestures in their most basic sense, the research team acknowledged in presenting their work that the movements involved are too complex to be captured by a predefined set of rules. Social life brings contextual conditions that do not necessarily follow a pattern, which makes such situations harder to handle through automation.
For example, when EVA imitates a smile, it does so without knowing whether it is responding to a smile of genuine joy or to a nervous one. For now, far from any public or commercial use, this remains a laboratory experiment, but one that could lay the groundwork for more complex future uses of the technology.
"Robots are intertwined in our lives in a growing number of ways, so building trust between humans and machines is increasingly important," said Boyuan Chen, lead author of the study compiling the research on EVA.