When Robots Smile Back 


In the field of human–robot interaction, autonomous animation has emerged as a crucial area in designing social robots that are useful, intuitive and user-friendly.

If you had entered the robotics lab at Columbia University a few months ago, you would have witnessed the team making funny faces. In May, scientists at the New York university showcased a robotic head they had built that mimics the facial expressions and movements of nearby humans. The autonomous robot, called Eva, has pliable blue skin with cables and motors that replicate the movements of more than 42 tiny muscles in the human face. Eva can express anger, disgust, fear, joy, sadness and surprise, among other emotions.

The researchers aimed to develop a responsive, facially realistic robot in order to build trust between humans and machines in situations where the two must work closely together. Eva is part of a breed of social robots designed to make humans comfortable and to respond to social interaction. Such robots show tremendous potential for application in healthcare, education and elderly care. The best-known social robot currently in development is Sophia, developed by Hanson Robotics. Sophia is a social humanoid robot that can display more than 50 facial expressions and is the first non-human to be given a United Nations title. Sophia recognises human facial expressions using a convolutional neural network, a type of model designed to resemble the way the human brain processes visual information. The model was trained on datasets of tagged photographs covering seven emotional states: happiness, sadness, anger, fear, disgust, surprise and neutral. With the help of chest and eye cameras, Sophia uses this pre-trained network to recognise a person's facial expressions. SoftBank Robotics has developed multiple social, semi-humanoid robots frequently used in research, including Pepper and Nao. Pepper is deployed both commercially and academically, and by consumers in over a thousand homes in Japan.
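Hanson Robotics has not published Sophia's architecture, so the following is only a minimal sketch of what a seven-class facial-expression classifier of this kind might look like in PyTorch. The `EmotionCNN` name, the layer sizes and the assumed 48×48 greyscale input are illustrative choices, not details of Sophia's actual model.

```python
import torch
import torch.nn as nn

# Hypothetical seven-class expression classifier: happiness, sadness, anger,
# fear, disgust, surprise, neutral. Input is assumed to be a 48x48 greyscale
# face crop; none of these choices come from Sophia's (unpublished) model.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # single channel: greyscale crop
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # logits over the seven emotions
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a single (dummy) face crop.
model = EmotionCNN()
face = torch.randn(1, 1, 48, 48)
predicted = EMOTIONS[model(face).argmax(dim=1).item()]
```

In a deployed robot, the face crop would come from the chest and eye cameras rather than random data, but the classification step itself looks much the same.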

The complex interplay of expressions

One of the most important aspects of social robotics is communication between humans and robots. Humans communicate through complex languages and use different types of nonverbal cues such as haptics (touch), kinesics (body language), proxemics (personal space) and paralanguage (tone and pitch). For social robots, kinesics is particularly important because facial expressions play a large role in earning the trust of the people around them. In human interactions, the complex interplay of facial expressions, eye gaze, head movements and vocalisations has an enormous influence on how a situation unfolds. Children learn these cues subconsciously, and that learning lays the foundation for successful social and emotional functioning throughout life. Humanoid robots must learn in the same way.

Until now, humanoid robots have required an input signal such as speech or music. The response follows several steps: identifying a representation of the motions, mapping meanings to motions, selecting the relevant motions, synchronising the motion sequences to the input signal, and keeping those sequences stable. At the back end, these parameterised motions need to be defined, generated from a small core motion library and synchronised to different input signals. Labelling assigns meaning to the motions and lets the robot map motions autonomously using motion features.
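The article does not describe a concrete implementation of this pipeline, so the sketch below is only one plausible shape for it. The `MotionClip` structure, the label-keyed motion library and the `respond_to_signal` helper are hypothetical names used purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical parameterised motion clip drawn from a small core motion library.
@dataclass
class MotionClip:
    label: str              # meaning assigned through labelling, e.g. "nod"
    duration_s: float       # nominal playback length in seconds
    joint_targets: list     # per-frame joint angles (greatly simplified)

# Core motion library: labels map to reusable, parameterised motions.
MOTION_LIBRARY = {
    "greeting": MotionClip("greeting", 1.5, [[0.0, 0.2], [0.1, 0.4]]),
    "nod":      MotionClip("nod",      0.8, [[0.0, 0.1], [0.0, -0.1]]),
}

def respond_to_signal(signal_label: str, signal_duration_s: float) -> MotionClip:
    """Select a motion for an input signal (e.g. a spoken phrase) and stretch
    it so its timing stays synchronised with that signal."""
    clip = MOTION_LIBRARY.get(signal_label, MOTION_LIBRARY["nod"])   # selection step
    scale = signal_duration_s / clip.duration_s                      # synchronisation step
    return MotionClip(clip.label, signal_duration_s,
                      clip.joint_targets * max(1, round(scale)))

# Example: a 3-second spoken greeting triggers a time-stretched greeting motion.
motion = respond_to_signal("greeting", 3.0)
```

The point of the sketch is the structure, not the numbers: meaning lives in the labels, the library stays small, and synchronisation happens at selection time.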

Eva is an evolution of this approach, since the emotions the robot can express are too complex to be governed by pre-defined rules. Eva's expressions are driven by AI that controls the facial movements. The robot relies on deep learning to "read" and then mimic the expressions of nearby humans, and it improves by trial and error while watching videos of herself. First, the robot's brain had to learn to use its complex system of mechanical muscles to generate facial expressions. After that, it had to learn how to read emotions on the faces of humans and then mimic their expressions.

The team filmed themselves making faces for hours and used the footage to train Eva's internal neural network to pair muscle motions with the expressions they create. A second network was then used to match Eva's own self-image with that of a human face, and after several adjustments, Eva was able to read human facial gestures from a camera and respond by mimicking the same expression.
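The Columbia team's code is not reproduced here, so the sketch below only illustrates the two-network idea described above, assuming a shared "expression space". The `face_encoder` backbone, the 64-dimensional embedding, the 12-motor output and the `mimic_expression` helper are all hypothetical.

```python
import torch
import torch.nn as nn

def face_encoder(embed_dim: int = 64) -> nn.Sequential:
    """Small CNN that embeds a 1-channel face image into an expression space
    (an assumed design, not Eva's published architecture)."""
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(8), nn.Flatten(),
        nn.Linear(16 * 8 * 8, embed_dim),
    )

# Network 1 (assumed form): trained on hours of self-video to pair Eva's own
# expressions, encoded from images of her face, with the motor commands that
# create them.
self_encoder = face_encoder()
motor_head = nn.Sequential(nn.ReLU(), nn.Linear(64, 12), nn.Tanh())

# Network 2 (assumed form): embeds a human face into the same expression
# space, so the closest matching self-expression can be reproduced.
human_encoder = face_encoder()

def mimic_expression(camera_frame: torch.Tensor) -> torch.Tensor:
    """Map a camera frame of a human face to motor commands for the robot."""
    expression = human_encoder(camera_frame)   # human face -> shared expression space
    return motor_head(expression)              # expression -> 12 motor commands in [-1, 1]

# Example with a dummy 64x64 greyscale camera frame.
commands = mimic_expression(torch.randn(1, 1, 64, 64))
```

During training, `self_encoder` and `motor_head` would be fitted on the self-video footage, while `human_encoder` would be aligned to the same embedding space; at run time only the human-facing path is needed.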

Human–robot interaction

Human–robot interaction research is still relatively new compared with traditional service robotics, where robots deliver hospital meals or provide security services, application domains that require minimal interaction. If robots are to play a meaningful role in education and assisted living facilities, realism of appearance and movement is important.

In elder care, social robots can help perform small tasks such as fetching food and water, provide entertainment through games, remind residents of appointments and offer social engagement. But older adults are unlikely to accept their robotic care team unless it bears some likeness to the real thing. Visually consistent, realistic appearance and movement seem to increase the sensory intensity of the experience. According to scientists running trials on robot-assisted therapy, if the human can be affected by the interaction and vice versa, then each partner is more invested in how the interaction unfolds, creating engagement and emotive connection.

Social robots have demonstrated that they can play a significant role in alleviating loneliness and social withdrawal, among other mental health concerns. Early last year, a global study known as CARESSES (short for Culture-Aware Robots and Environmental Sensor Systems for Elderly Support), jointly funded by the European Union and the Japanese government, investigated the use of robots in caring for the elderly. Pepper, a "culturally competent robot", was tested on care home residents in Britain and Japan. Researchers found that those who interacted with it for up to 18 hours over the course of two weeks saw a significant improvement in their mental health.

In 2012, the EMOTE project (EMbOdied-perceptive Tutors for Empathy-based learning) began exploring empathy in virtual and robotic tutors; it ran until March 2016. Its robot tutors responded to learners and tweaked their models to offer new and engaging approaches to learning.

The project drew on the concept of emotional design, enhancing the user experience and encouraging emotional resonance. In emotional interaction design, the machine needs to capture key information about the user in real time and recognise the user's emotional state. Against this background, such systems use deep learning for more accurate and effective emotion recognition, thereby optimising the interactive design and improving the user experience. Social robots of this kind have particular applications in working with children with special needs.
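As a rough illustration of that real-time loop, and not a description of the EMOTE system itself, the sketch below captures webcam frames with OpenCV, detects a face with a standard Haar cascade and hands the crop to a classifier like the `EmotionCNN` sketched earlier. The `emotion_cnn` module name, the 48×48 crop size and the adaptation step are all assumptions.

```python
import cv2
import torch

from emotion_cnn import EMOTIONS, EmotionCNN  # the earlier sketch, saved as a hypothetical module

# Hypothetical real-time loop: capture a frame, find a face, classify its
# expression, and let the interactive system adapt its behaviour.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = EmotionCNN()
model.eval()

capture = cv2.VideoCapture(0)
while True:
    ok, frame = capture.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        crop = cv2.resize(grey[y:y + h, x:x + w], (48, 48))
        tensor = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
        with torch.no_grad():
            emotion = EMOTIONS[model(tensor).argmax(dim=1).item()]
        # An interaction designer would adapt the system here, e.g. change
        # the tutor's tone or pacing based on the recognised state.
        print("detected emotion:", emotion)
    if cv2.waitKey(1) == 27:   # press Esc to quit
        break
capture.release()
```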

Social robots are increasingly demonstrating effectiveness as low-intensity behaviour change agents.
