What is emotional AI?

Emotional AI refers to technologies that use affective computing and artificial intelligence techniques to sense, learn about and interact with human emotional life. It is a weak form of AI in that these technologies aim to read and react to emotions through text, voice, computer vision, biometric sensing and, potentially, information about a person’s context.

While the effectiveness of current methods is highly debatable, we believe that the use of human-state measurement to engage with qualitative dimensions of human life is still in its infancy. Emotional AI, and wider automated human-state measurement, thus requires ongoing social, cultural, legal and ethical scrutiny.

How?

The following techniques are used to try to sense and discern people’s states, emotions and expressions:

Sentiment analysis: mining online language, emojis, images and video for evidence of moods, feelings and emotions.
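
By way of illustration, a minimal lexicon-based scorer in Python. The word and emoji weights below are invented for this example; production systems rely on large curated lexicons or trained models.

```python
# Toy lexicon-based sentiment scorer; the lexicon is illustrative only.
LEXICON = {
    "love": 2.0, "great": 1.5, "happy": 1.5, "ok": 0.5,
    "sad": -1.5, "angry": -2.0, "terrible": -2.0,
    "😊": 1.5, "😢": -1.5,  # emojis carry affective weight too
}

def sentiment_score(text: str) -> float:
    """Sum the weights of known tokens; a crude proxy for mood."""
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

print(sentiment_score("I love this but the ending was terrible 😢"))  # -1.5
```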

Large language models: for makers of chatbots, these extend and deepen human-system interaction through emphasis on emotional language.
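
As a sketch of the mechanism, the example below shows a system prompt steering a chatbot toward emotionally attuned language. `call_model` is a hypothetical stand-in, not any specific vendor's API.

```python
# Hypothetical sketch: `call_model` stands in for a real LLM API call.
EMPATHIC_SYSTEM_PROMPT = (
    "You are a supportive assistant. Acknowledge the user's feelings "
    "explicitly before answering, and mirror their emotional tone."
)

def call_model(system_prompt: str, user_message: str) -> str:
    # Placeholder: a real implementation would call an LLM provider here.
    return f"[reply to {user_message!r}, steered by the system prompt]"

def empathic_reply(user_message: str) -> str:
    return call_model(EMPATHIC_SYSTEM_PROMPT, user_message)

print(empathic_reply("I'm nervous about my interview tomorrow."))
```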

Facial coding of expressions: analyses faces from a camera feed, a recorded video file, a video-frame stream or a photo to “infer” an emotion. The effectiveness of this method is highly debatable, especially when based on the “big six” emotions.
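
For a flavour of the usual pipeline, a sketch that detects faces in a frame and hands each crop to a classifier. The Haar cascade ships with OpenCV; `classify_emotion` is a hypothetical placeholder for the contested inference step.

```python
import cv2  # pip install opencv-python
import numpy as np

# The face detector ships with OpenCV; the emotion model is a placeholder.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_emotion(face_crop: np.ndarray) -> str:
    # Hypothetical stand-in for a trained "big six" classifier.
    return "neutral"

def label_faces(frame: np.ndarray) -> list:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x, y, w, h, classify_emotion(frame[y:y + h, x:x + w]))
            for (x, y, w, h) in faces]

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for a camera frame
print(label_faces(frame))  # [] on a blank frame; real frames yield labelled boxes
```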

Voice analytics: includes elements such as the rate of speech, changes in pausing, and tone.
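
A rough sketch of one such element, pause detection, by thresholding frame energy in a raw waveform; the frame size and silence threshold are illustrative.

```python
import numpy as np

def pause_ratio(signal: np.ndarray, sr: int,
                frame_ms: int = 25, threshold: float = 0.02) -> float:
    """Fraction of frames whose RMS energy falls below a silence threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return float((rms < threshold).mean())

# Synthetic example: one second of "speech" followed by one second of silence.
sr = 16_000
speech = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
print(pause_ratio(np.concatenate([speech, np.zeros(sr)]), sr))  # 0.5
```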

Eye-tracking: measures gaze, eye position and eye movement.
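
As a simplified example, a dispersion-based fixation test of the kind used to segment gaze data into fixations and saccades; the threshold is illustrative.

```python
import numpy as np

def is_fixation(gaze_xy: np.ndarray, max_dispersion: float = 1.0) -> bool:
    """Dispersion test: a window of gaze samples counts as a fixation if
    its x-range plus y-range (e.g. in degrees) stays under the threshold."""
    spread = gaze_xy.max(axis=0) - gaze_xy.min(axis=0)
    return bool(spread.sum() <= max_dispersion)

samples = np.array([[10.1, 5.0], [10.2, 5.1], [10.0, 5.0]])  # tight cluster
print(is_fixation(samples))  # True: the eye is dwelling, not sweeping
```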

Wearables: sense skin responses, muscle activity, heart activity, skin temperature, respiration and brain activity.
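
A minimal sketch of one such computation, estimating heart rate by counting pulse peaks in a PPG-like signal; the synthetic data and parameters are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(ppg: np.ndarray, sr: int) -> float:
    """Estimate heart rate by counting pulse peaks in a PPG-like signal."""
    peaks, _ = find_peaks(ppg, distance=int(sr * 0.4))  # beats >= 0.4 s apart
    duration_min = len(ppg) / sr / 60
    return len(peaks) / duration_min

# Synthetic 10-second trace pulsing at 1.2 Hz (72 beats per minute).
sr = 100
t = np.arange(10 * sr) / sr
print(round(heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t), sr)))  # 72
```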

Gesture, behaviour and internal physiology: cameras track hands, faces and external bodily behaviour, and can also remotely estimate heart rate.
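
Remote heart-rate tracking typically recovers a pulse from tiny colour changes in facial video (remote photoplethysmography). A simplified sketch, assuming a per-frame mean green-channel trace has already been extracted from the face region:

```python
import numpy as np

def remote_pulse_bpm(green_trace: np.ndarray, fps: float) -> float:
    """Pick the dominant frequency in the heart-rate band of an rPPG trace."""
    x = green_trace - green_trace.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)  # plausible pulse range, 42-240 BPM
    return float(freqs[band][np.argmax(spectrum[band])] * 60)

# Synthetic trace: 30 fps, 10 s, pulse at 1.2 Hz (72 BPM) plus sensor noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(300)
print(round(remote_pulse_bpm(trace, 30.0)))  # 72
```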

Virtual Reality (VR): allows remote viewers to understand and feel-into what the wearer is experiencing. Headwear may also contain EEG and face-muscle sensors.

Augmented Reality (AR): remote viewers can track attention, reactions and interaction with digital objects.

Machine-readable human life: OK or not?

Emotional AI promises a better experience of services, devices and technologies. However, there are considerations that give cause to question and mistrust the rollout of these technologies. Citizens, researchers, policy-makers and industry should consider the following:

  • Overall, is it desirable that emotions are machine-readable?

  • Do such technologies make sense given the ambiguous nature of emotion and subjective life?

  • What of different national, cultural, social and historical contexts?

  • What of racial, sex, gender and trans bias in computer vision and training data?

  • Will data about emotion be used in a manner that benefits citizens?

  • What of relationships with emotion-sensing objects?

  • Are protections adequate? That is, are laws and regulations appropriate (and are they being followed)?

  • Is the spirit of data protection appropriate? This tends to focus on identification, but is identification the principal issue?

  • Are we OK with social media companies registering mental states, emotions and moods?

  • What of third-party uses by data brokers?

  • What of use-types? Games are one thing, but what of job opportunities, border controls, education, health insurance…?

  • What of uses with children?

  • Is law the only answer, or what of design?