Histories Of...

Emotion Science Keeps Getting More Complicated. Can AI Keep Up?

We may never be able to build a machine that can recognize the full diversity of human emotional experience


Grid of photographs of faces with emojis superimposed over them

In the early 1990s, Lisa Feldman Barrett had a problem. She was running an experiment to investigate how emotions affect self-perception, but her results seemed to be consistently wrong. Eight times in a row, in fact.

She was studying for a Ph.D. in the psychology of the self at the University of Waterloo, Ontario, Canada. As part of her research, she tested some of the textbook assumptions that she had been taught, including the assumption that people feel anxiety or depression when, despite living up to their own expectations, they do not live up to the expectations of others.

But after designing and running her experiment, she discovered that her test subjects weren’t distinguishing between anxiety and depression. They weren’t differentiating between fear and sadness, either. According to the model that dominated emotion science at the time, there were six distinct basic emotions–fear, sadness, anger, happiness, surprise, and disgust–and everyone in the world was able to not only experience them, but distinguish clearly between them. This had been conclusively demonstrated 20 years before by Paul Ekman, the psychologist who had traversed the world and found seemingly concrete evidence that everyone, everywhere, felt those six emotions, and that they were reflected in universal, and distinct, facial expressions.

For Barrett, it didn’t make sense that her test subjects were experiencing emotions in a way that didn’t fit this model. She wondered if perhaps these people were making mistakes; if only she could find out which participants were getting their emotions “right” and which were getting them “wrong,” she could teach the mistaken ones to understand themselves better. She even blamed herself for developing a faulty experiment, and thought about becoming a clinical psychologist–leaving the research to people who could get the “proper” results.

But Barrett became sure that she hadn’t made an error. The more she looked through her data, the more she realized that the mistake wasn’t hers–it was Ekman’s. The six universal emotions didn’t exist. She soon found other studies–conducted in the lab as well as out in the field–that also indicated Ekman’s model was wrong.

Barrett found herself asking the question: If the definition of emotions set forth by Ekman and others is incorrect, then what are emotions? This isn’t just an academic exercise. It’s central to the question of whether we’ll ever build an artificial intelligence that experiences emotions just as we do. And part of the answer depends on exactly whose experiences you’re talking about.

One example of just how our model of emotions has changed thanks to Barrett’s work can be seen in a new game, The Vault. The game is a time-traveling puzzle, in which the player wanders through different historical scenarios and solves challenges in order to progress–and the solutions come from understanding how certain emotions were experienced in each section’s period of history. (Full disclosure: The Centre for the History of Emotions at Queen Mary University of London, which co-developed the game, is where I work.)

The game’s premise is that emotions are not static or universal, but instead change over time. It tries to immerse the player in unusual, unfamiliar sets of feelings–like acedia, caused by a disconnection with God or the universe. There’s also another low feeling, melancholia, which is marked by sensations of the body being filled with a horrible black bile, and involves an experience of bodily distortion, such as believing one’s legs are made of glass.

For me, the interesting thing about this game is that no matter how well I learn to understand past emotions, I’m not sure I’ll ever be able to really “feel” these emotions in the same intuitive way that people did in the past. This raises a question: If we humans are unable to fully experience some emotions that were undoubtedly real, like melancholia, purely because of the historical context in which we live, will machines ever be able to really feel anything at all?

This doesn’t just apply to historical emotions, either. Ekman may have documented hundreds of facial expressions from different cultures around the world, but his six basic emotions (inspired by the work of Charles Darwin) came from a small set of American faces, which Ekman then imposed as a framework over the expressions seen in the rest of the world. There’s a bias embedded here. Why does he, or anyone else, get to reduce the diversity of human facial expressions, tones of voice, and other behaviors down to any kind of shortlist?

In the 17th century, the standardization of many European languages was often shaped by the arbitrary choices of book printers, who picked one way of spelling a word over another–and this, in turn, shaped cultural expression. Emotion science faces the same problem. Treating localized emotional variants as if they will be the same for thousands, or even millions, of people opens the way to a self-fulfilling standardization of emotions, rather than to the understanding and appreciation of the diversity we already have. Psychologists run into this kind of problem all the time with “WEIRD” samples: people who are Western and educated, from industrialized, rich, and democratic countries–in other words, the typical North American or European undergraduate studying psychology, who also happens to be the typical volunteer for a psychological study. This sampling bias undermines any quest for universal human traits from the start, and the same cultural blind spots are notorious in the tech industry.

One of the architects of The Vault, Thomas Dixon–a historian of emotions who pretty much wrote the standard text on the subject–told me that, just as “each culture (and each individual) [has] their own different repertoire of feelings,” there should be “no reason why an AI machine should not be able to learn those patterns.” However, Barrett also argues that faces, voices, and behaviors associated with emotions could well change not only from culture to culture but subtly from person to person. Are we ever going to be able to build a machine that can recognize the full diversity of human emotional experience? Or are we going to build one that only recognizes a small sliver of that diversity, and so forces its users to change their behavior to match?

We could take a huge liberty here, and assume that eventually AI could be programmed free of these cultural biases–but with the benefit of years more research, Barrett has concluded that emotions are much more complicated than faces and voices. She, along with James Russell, helped develop a more nuanced system than Ekman’s: the “psychological construction of emotions” model. It posits that emotions happen when the brain takes a number of factors–internal feelings, what’s going on in the outside world, what individuals have learned throughout their lives from family and culture, and so on–and “constructs” an emotion by processing all of those components in parallel.
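To make that idea a little more concrete, here is a deliberately crude sketch (in Python, with invented names, concepts, and weights–none of it comes from Barrett’s actual work). It only illustrates the shape of the claim: the emotion label falls out of weighing bodily signals, the situation, and learned concepts together, rather than being read off a face in isolation.

```python
from dataclasses import dataclass

@dataclass
class Ingredients:
    bodily_arousal: float   # interoceptive signal, 0.0 (calm) to 1.0 (agitated)
    situation: str          # e.g. "traffic_jam", "football_match"
    learned_concepts: dict  # culture-specific emotion concepts and their typical cues

def construct_emotion(x: Ingredients) -> str:
    """Pick whichever learned concept best fits body and situation together."""
    best_label, best_score = "unclear", 0.0
    for label, cues in x.learned_concepts.items():
        score = 1.0 if x.situation in cues.get("situations", []) else 0.0
        # how close the current body state is to what this concept usually involves
        score += 1.0 - abs(x.bodily_arousal - cues.get("typical_arousal", 0.5))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical, culture-specific concepts; a different upbringing would supply different ones.
concepts = {
    "road_rage": {"situations": ["traffic_jam"], "typical_arousal": 0.9},
    "elation":   {"situations": ["football_match"], "typical_arousal": 0.8},
}

# Same scowl-and-fist body state, two different situations, two different emotions.
print(construct_emotion(Ingredients(0.85, "traffic_jam", concepts)))     # road_rage
print(construct_emotion(Ingredients(0.85, "football_match", concepts)))  # elation
```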

Similarly, the brain recognizes emotions in other people by observing both others’ bodily or facial movements and the contexts in which people make those gestures. When it comes to the question of whether any AI can ever experience emotions, it’s that factor–understanding all of these broader cultural contexts–that proves the undoing of the machines.

For example, take this picture of my face and arm:

Richard Firth-Godbehere looking angry or possibly excited

Do I have road rage? Or am I celebrating hearing on the radio that my team has scored?

Even humans struggle to figure this kind of thing out. If we want to put emotion-detecting AI in self-driving cars so that they can take control from a human and pull over when they detect road rage, my celebratory fist pump (not that my team scores that often) could mean I end up stuck on the side of the road, getting angrier and angrier at my annoyingly intelligent car.

To avoid roads jammed with furious drivers, emotion-processing AI will have to understand context–and context requires knowing something emotion scientists call value. This is the meaning that we construct about the world around us. If you see me pumping my fist, my value to you depends on whether you think I’m violent (in which case my value to you is as a threat); whether you know of my fabled physical cowardice (in which case my value to you is as a joke); and whether you’re watching the game with me and support the same team (in which case my value to you is as a friend) or a different team (in which case my value to you is, possibly, as an enemy).
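To put that fist-pump example in more mechanical terms, here is a toy illustration of value–a few hypothetical rules, not a proposal for how a real system would or should assign meaning:

```python
def value_of_fist_pump(observer: dict) -> str:
    """Assign a meaning to the same gesture based on what the observer already knows."""
    if observer.get("believes_person_is_violent"):
        return "threat"
    if observer.get("knows_person_is_a_coward"):
        return "joke"
    if observer.get("watching_same_match"):
        return "friend" if observer.get("supports_same_team") else "possibly an enemy"
    return "ambiguous"

# One gesture, three observers, three different values.
print(value_of_fist_pump({"watching_same_match": True, "supports_same_team": True}))  # friend
print(value_of_fist_pump({"believes_person_is_violent": True}))                       # threat
print(value_of_fist_pump({}))                                                         # ambiguous
```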

But understanding value isn’t as easy as building a database of all these different factors. Our brains don’t build databases of memory and emotion, either. As Barrett explains, “Brains don’t work like a file system. Memories are dynamically constructed in the moment, and brains have an amazing capacity to combine bits and pieces of the past in novel ways.” Our brains seem to use disparate feelings and memories to build a framework of categories for understanding context; these categories are then filtered and distorted in ways that help us react appropriately to new experiences. This is one of the key reasons why eyewitness testimony is often unreliable in court, and why cross-examination is an essential part of the legal process.
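A toy contrast may help here. The “reconstruction” below is pure invention–a mood-biased sampling of fragments–but it shows why the same cue needn’t return the same recollection twice, whereas a file-system lookup always does:

```python
import random

# Invented fragments standing in for bits and pieces of past experience.
memory_fragments = [
    {"cue": "match", "detail": "crowd roaring"},
    {"cue": "match", "detail": "rain on the windscreen"},
    {"cue": "match", "detail": "a scowling driver nearby"},
    {"cue": "commute", "detail": "gridlock on the ring road"},
]

def file_system_recall(cue: str) -> list:
    """Exact retrieval: the same cue always returns the same records, unchanged."""
    return [f["detail"] for f in memory_fragments if f["cue"] == cue]

def reconstructive_recall(cue: str, mood: str, k: int = 2) -> list:
    """Rebuild 'what happened' from a sample of fragments, biased by current mood."""
    candidates = [f["detail"] for f in memory_fragments if f["cue"] == cue]
    recalled = random.sample(candidates, min(k, len(candidates)))
    if mood == "irritable":
        # an unpleasant detail gets woven in whether or not it was sampled
        unpleasant = [d for d in candidates if "scowling" in d]
        if unpleasant and unpleasant[0] not in recalled:
            recalled[-1] = unpleasant[0]
    return recalled

print(file_system_recall("match"))                       # always the same three details
print(reconstructive_recall("match", mood="irritable"))  # varies from recall to recall
```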

For a machine to understand the emotions I’m feeling during my fist pump, it needs to contextualize all of this: memories of fist pumps and scowling faces; memories of what a car is; memories of different sports; reactions to sports; memories of how rarely my team scores; how I feel about my team; an analysis of my driving; an understanding that those are tears of joy, not sorrow (or anger); and so on. The sum total of this mishmash? An emotion.

Machines may remember things perfectly, but the human emotional process only works because it’s so fuzzy–the fuzziness is what makes cognitive dissonance possible. A machine might have to treat a scowl and a raised fist as a threat while simultaneously knowing that I’m not a violent person. Which piece of information should it react to? The brain can juggle hundreds of bits of conflicting data like this, and we usually end up able to handle new contexts that might seem logically impossible to an AI. My car might not just pull over–it could shut down completely in the middle of the highway. Then I’d be really angry.

The idea that human memories aren’t simply recording devices but a categorical system that helps us thrive and survive is known as “dynamic categorization.” It’s a model that’s now taken as a given within psychology, and researchers in nonscientific fields, such as history, also use it. Forcing an AI to access memories chosen for it would still impose a narrow set of WEIRD-like values onto this hypothetical feeling machine, but developing an AI that can create its own humanlike memories and values as it learns would avoid that pitfall. However, as far as I’m aware, there don’t appear to be any AI or computer systems in development that use dynamic categorization when handling memory, let alone emotion detection.

Let’s say we solve the two major problems I’ve raised. Our AI can recognize faces, voices, and behavior, and it uses dynamic categorization to store and recall information. What we’ve built is a machine that only recognizes emotions. It’s a metal psychopath–it can’t empathize with me. I, for one, don’t like the idea of being driven around by one of those machines.

Our final step in building a feeling machine is to introduce feelings. The ability to understand the value of the world around an organism did not evolve separately from the senses that tell that organism about its world. Without feelings of revulsion caused by smell and taste, we would all have died from eating rotten food long ago. Without hunger, we’d starve. Without desire, we wouldn’t reproduce. Without panic, we might run toward the saber-toothed tigers, not away from them. And, just as the model of the five traditional senses is nowhere near comprehensive in terms of the sensory abilities that humans actually have, we also have a wide and diverse range of internal feelings, known in psychology as affects.

Affects aren’t emotions, but rather judgments of value that create pleasant or unpleasant sensations in the body, either making us excited or calming us down. Affects help us to evaluate context, telling us whether it’s a dog to trust (and domesticate) or a tiger to flee. To feel affects, our AI needs one more thing: a body. Affects can’t exist without one. As Barrett has argued, “A disembodied brain has no bodily systems to balance, it has no bodily sensations to make sense of. A disembodied brain would not experience emotion.”
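As a rough illustration only: affect is often described along two dimensions, how pleasant or unpleasant something feels and how worked up or calm it leaves us. The sensor names and thresholds in this sketch are invented, and a real embodied system–virtual or physical–would have to derive them from its own body:

```python
def affect(heart_rate: float, stomach_comfort: float) -> dict:
    """Map crude bodily signals onto a pleasant/unpleasant and calm/excited pair."""
    arousal = min(1.0, max(0.0, (heart_rate - 60.0) / 80.0))  # 60 bpm ~ calm, 140 bpm ~ frantic
    valence = min(1.0, max(-1.0, stomach_comfort))            # -1 queasy, +1 comfortable
    return {"valence": valence, "arousal": arousal}

# Smelling rotten food: unpleasant and mildly arousing–a nudge away from the meal,
# long before any emotion word like "disgust" gets applied.
print(affect(heart_rate=95, stomach_comfort=-0.8))  # {'valence': -0.8, 'arousal': 0.4375}
```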

A feeling machine’s body doesn’t have to be a Blade Runner-style flesh-and-blood replica of a human. It could be a virtual body, built entirely from lines of code. Unfortunately, most AI developers who imbue their creations with an understanding of emotions–even those who connect their creation to a body of some kind–are building machines that, at best, react in simple ways to basic stimuli like sight, sound, and pressure. More, much more, is needed to truly create a feeling machine.

That said, building such an AI would be an effective way to figure out what that “much more” is. Right now, we have only a few clunky options to test our ideas about emotion on humans: we can get people to take a survey; we can put them in loud, claustrophobia-inducing metal tubes and ask them to “feel emotions as they would naturally”; or we can study the effects of physical alterations of the brain, whether due to an accident or as a side effect of surgery.

When it comes to seeing emotions directly, we’re still fumbling in the dark. A feeling machine could turn on the light.

We can imagine that we’ve cracked it. We’ve done the experiments and we’ve created a machine that can experience affects, read context, and understand value–and all those abilities have been synced up perfectly in order to construct emotions. Plus, we’ve managed not to imbue our creation with our own cultural biases. We have an emotional, feeling machine. There’s one final issue.

Creating a machine that experiences emotions doesn’t tell us if we have a machine that feels emotions in the same way we do. It might act as if it does, it might say that it does, but can we ever truly know that it does?

Let’s go back to that example of the self-driving, road rage-detecting car. Sure, it might understand why I’m scowling and raising my fist, and may empathize with me–it understands the value of my gesture. But what we’ve got here is just a kind of clockwork approximation of something that feels empathy, and we still don’t fully understand how all the gears mesh together to “feel” an emotion. For all we know, that ability to feel may still prove impossible to artificially induce.

Perhaps that doesn’t matter. After all, I can’t even be sure if you, the human reader of this piece (and not the many machine readers that are also indexing it for search engines, although, hello, how are you?), really feel emotions in the way that I do. I can’t climb inside your brain. Barrett argues that emotions depend not only on a human mind’s perception of its own affects, contexts, and values, but also on how those perceptions work “in conjunction with other human minds.” If we–or other AI minds, even–think that a machine can feel, is that enough?

The question then is: Should we judge something based on how it experiences emotions, or whether it experiences them at all? We treat animals differently based on intelligence, for example, but where does emotion come into play when trying to draw the line between a “machine” and a creation that is “artificial, but alive”?

The distinction between an AI that only appears terrified of death and an AI that is genuinely afraid is an important one if we ever have to turn the machine off. Can we even make that decision if there’s ambiguity? The philosopher Chidi Anagonye, a character on the sitcom The Good Place, faced just this dilemma.

Ultimately, whether or not we produce machines with emotions may depend not on the skills of scientists but on the ethical objections of ordinary people. I have few suggestions on this front–but if I were contemplating a career in philosophy right now, I’d be thinking about making my central field the ethics of emotion AI. There’s much to be done.

How We Get To Next logo

How We Get To Next was a magazine that explored the future of science, technology, and culture from 2014 to 2019. This article is part of our Histories Of... section, which looks at stories of innovation from the past. Click the logo to read more.