Discover the stupidity of AI emotion recognition with this little browser game.

Tech companies not only want to identify you through facial recognition, they also want to read your emotions with the help of AI. Yet for many scientists, claims about computers’ ability to understand emotions are fundamentally flawed, and a little browser-based game created by researchers at the University of Cambridge aims to show why.

Head over to the site and you can see how your computer “reads” your emotions through your webcam. The game challenges you to produce six different emotions (happiness, sadness, fear, surprise, disgust and anger), which the AI will try to identify. However, you will likely find that the software’s readings are far from accurate, often interpreting even exaggerated expressions as “neutral.” And even when you produce a smile that convinces your computer that you are happy, it will never know whether you were faking it.

This is the goal of the site, says creator Alexa Hagerty, a researcher at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and Centre for the Study of Existential Risk: to demonstrate that the basic premise underlying much emotion recognition technology, namely that facial movements are intrinsically linked to changes in feeling, is flawed.

“The premise of these technologies is that our faces and inner feelings are correlated in a very predictable way,” Hagerty tells The Verge. “If I smile, I am happy. If I frown, I am angry. But the APA did this big review of the evidence in 2019, and they found that people’s emotional space cannot be readily inferred from their facial movements.” In the game, says Hagerty, “you have the opportunity to move your face quickly to impersonate six different emotions, but the point is you didn’t internally feel six different things, one after the other in a row.”
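The one-to-one premise Hagerty describes can be sketched as a toy lookup table. This is purely illustrative, not the code of any real product; the expression names and function below are invented for the example:

```python
# Toy illustration of the naive premise emotion-recognition systems rest on:
# a fixed, one-to-one mapping from visible facial movement to inner feeling.
NAIVE_MAPPING = {
    "smile": "happiness",
    "frown": "anger",
    "raised_brows": "surprise",
    "wrinkled_nose": "disgust",
    "wide_eyes": "fear",
    "downturned_mouth": "sadness",
}

def naive_emotion_read(facial_movement: str) -> str:
    """Return the emotion the naive premise assigns to a facial movement."""
    return NAIVE_MAPPING.get(facial_movement, "neutral")

# The flaw the game exposes: the same movement gets the same label
# whether or not the person actually feels anything.
print(naive_emotion_read("smile"))        # "happiness", even for a fake smile
print(naive_emotion_read("closed_eyes"))  # "neutral": a wink and a blink look identical
```

Real systems use trained classifiers rather than a lookup table, but the core assumption the researchers criticize is the same: each expression is treated as a reliable signal of one underlying feeling.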

A second minigame on the site emphasizes this point by asking users to identify the difference between a wink and a blink, something machines cannot do. “You can close your eyes and it can be an involuntary action or a significant gesture,” says Hagerty.

Despite these problems, emotion recognition technology is rapidly gaining ground, and companies promise that such systems can be used to screen job candidates (by giving them an “employability score”), detect potential terrorists, or assess whether commercial drivers are sleepy or drowsy. (Amazon is even rolling out similar technology in its own trucks.)

Of course, humans also make mistakes when we read the emotions on people’s faces, but handing this work over to machines comes with specific downsides. For one thing, machines cannot read other social cues the way humans can (as with the wink/blink dichotomy). Machines also often make automated decisions that humans cannot question, and can conduct large-scale surveillance without our knowledge. And, as with facial recognition systems, emotion-sensing AI is often racially biased, more frequently assessing the faces of Black people as displaying negative emotions, for example. All of these factors make AI emotion detection far more concerning than humans’ ability to read the feelings of others.

“The dangers are manifold,” says Hagerty. “When human communication goes wrong, we have many options to correct that. But once you are automating something, or the reading is done without your knowledge or consent, those options disappear.”
