Charles Darwin regarded facial expressions as windows to animals' as well as humans' emotions. However, making sense of what's written on our faces has never been an easy task. Recently, a group of researchers at the Max Planck Institute of Neurobiology in Germany successfully used a machine learning algorithm to decode the seemingly unreadable facial expressions of laboratory mice.

How AI facilitates the recognition of emotions in animals

The three-year study is believed to bring us closer to understanding which neurons are responsible for encoding particular expressions and how those expressions relate to the manifestation of emotions in our brains. In fact, the study was inspired by a paper published in Cell back in 2014, in which a separate group of scientists from the California Institute of Technology concluded that emotions correspond to the strength of the stimuli that elicit them and linger even after the stimuli are gone.

As such, the Max Planck Institute of Neurobiology team built on this notion and fed laboratory mice sweet or bitter fluids to bring about feelings of pleasure or disgust, then video-recorded the mice's facial movements. The researchers knew that mice express emotions by moving different parts of their faces, but they could not tell exactly which movement is associated with which emotion.

What they did was dissect the videos into snapshots of the facial expressions the mice demonstrated while reacting to different stimuli. A machine learning algorithm was used to categorize these facial expressions objectively and quantitatively on a millisecond timescale. Eventually, the researchers located the neural circuits likely to trigger certain emotions and employed a technique called two-photon calcium imaging to highlight the neurons that fire when particular facial expressions appear.
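At its core, this step is a supervised, frame-level classification problem: each video snapshot is reduced to a feature vector and labeled by the stimulus that elicited it, so expressions can be scored frame by frame rather than by eye. The sketch below illustrates the idea in Python; the histogram-of-oriented-gradients features, the random forest classifier, and the synthetic frames are illustrative assumptions, not the study's published pipeline.

```python
# Minimal sketch of frame-by-frame expression classification.
# The HOG features, random forest, and synthetic labeled frames are
# illustrative assumptions, not the actual pipeline used in the study.
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def frame_to_features(frame: np.ndarray) -> np.ndarray:
    """Summarize one grayscale video frame as a histogram-of-oriented-
    gradients vector, capturing coarse local facial movement."""
    return hog(frame, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1))

# Stand-in data: random 128x128 grayscale frames, each labeled by the
# stimulus delivered when it was captured (0 = sweet, 1 = bitter).
rng = np.random.default_rng(0)
frames = rng.random((200, 128, 128))
labels = rng.integers(0, 2, size=200)

X = np.array([frame_to_features(f) for f in frames])
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Because every frame is classified independently, the temporal resolution of the readout is set by the camera's frame rate, which is what allows expressions to be tracked on such a fine timescale.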

Why can't this be repeated in human beings?

Unfortunately, using AI to analyze the gestures and facial expressions of human beings is less straightforward and more controversial. That's why commercial tools used to screen job applicants, catch potential deceit in courtrooms, and so on are often questioned for possible discrimination and injustice. One of the reasons is that, contrary to what US psychologist Paul Ekman proposed in the 1960s and 1970s, emotional expressions are not truly universal, and most of the time we may not be able to confidently infer a person's emotional state from their facial expressions.

One of the most notable problems with Ekman's proposition is that human facial expressions, unlike those of animals, are more socially adjusted, and there can be considerable variation even with the six primary emotions acting as a baseline. People may fake their emotions or experience feelings without showing them on their faces. As a result, facial expressions can either reveal or conceal one's real emotions; at the very least, there is no clear-cut link between the two.

Editors and authors of the journal Psychological Science in the Public Interest once spent two and a half years trying to find a link between facial expressions and emotions by going through data from around 1,000 peer-reviewed papers, but they failed. Besides, some believe the face alone will not tell us the whole story about human emotions, because we also rely on body movements, physiology, and personality to deduce and display emotions.

Most importantly, emotions should not be singularly defined. Happiness alone, for example, can encompass many different things, ranging from pleasure to pride and joy. Unless an AI algorithm can capture all these varieties and bear in mind the subjectivity behind each of them, what it's doing right now is just taking things at literal face value.

*

Author Bio

Hazel Tang A science writer with a data background and an interest in current affairs, culture, and the arts; a no-med from an (almost) all-med family. Follow on Twitter.