Emotional AI & the Optimization of Disinformation

Thanks to Megan Boler for organising an international launch for her edited collection, Affective Politics of Digital Media.

Our chapter, ‘Empathic Media, Emotional AI, and the Optimization of Disinformation’, reflects on the intersections between disinformation and emotional AI: systems that claim to read, judge and react to emotions, affects and other subjective states through text, voice, computer vision and biometric sensing.

 

A Brief Sociological History of Emotions

To address how emotion and media are being ‘weaponised’ in political contexts today, it is vital to have historical perspective. David Hume, for example, in A Treatise of Human Nature (1739), spoke of what we now call ‘emotional contagion’, framing emotion as something that passes from one person to another as affections spread. Émile Durkheim, in The Division of Labour in Society (1893), gave a fizzier account, ‘collective effervescence’, describing what we would today phrase in terms of amplification and positive feedback loops. Likewise, Gustave Le Bon’s The Crowd (1896) presented emotional contagion as undermining individual rational thought, producing exaggeration of sentiment, impulsiveness, destructive force and an absence of critical spirit.

These early sociologies help us understand the perceived dangers of collective emotion, and modern concerns about the so-called balance between emotion and rationality (a false opposition).

 

Social Media are Emotional AI

Emotional AI is an emergent phenomenon across diverse sectors, but social media are its prime site today. Many studies have documented that anti-social and uncivil behaviour proliferates online, as do positive emotions, and that emotions are socially contagious online. Social media are designed this way – to maximise and spread strong reactions, emotions and engagement from users. This is evidenced in studies on Facebook (USA), Twitter (USA, Germany) and Weibo (China).

In our chapter, we outline the ‘economics of emotion’ central to the appeal of fake news. This examines how emotions are leveraged online to generate attention and viewing time, which convert into revenue from online behavioural advertising. Entirely fake news websites and their sensationalist content (which also accords with users’ preconceived ideas) act as clickbait.

We also outline the ‘politics of emotion’ – a core driver of political propaganda online. In political campaigns across the world, we see increased use of data analytics and data-management approaches to profile and identify target audiences, including ‘persuadables’, and to target them with messages that stoke polarisation and conflict. Studies show this at play in the 2016 US presidential election, the UK’s 2016 ‘Brexit’ referendum on leaving the European Union, the Catalan independence referendum of October 2017, and Jair Bolsonaro’s campaign to become Brazil’s president in 2018. Even in Africa, where internet penetration is much lower (but mobile phone usage is high), we see evidence of manipulation on social media for political gain.

Whether for commerce or propaganda, the rise of emotional AI allows ever more granular targeting of the emotional states of individuals and groups, with real-time feedback to optimise content for target audiences. For instance, on Facebook we don’t just ‘Like’ things. After years of testing, Facebook rolled out its Reactions globally in 2016, allowing people to give quick, empathetic responses to posts. If you long-press the Like button, you get the option to use one of five pre-defined emotions, namely ‘Love’, ‘Haha’, ‘Wow’, ‘Sad’ or ‘Angry’. With this data, Facebook can understand users on a deeper, more emotional level, enabling it to personalise the content shown to each user, with the aim of increasing the time users spend online.
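To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how reaction data of this kind could be aggregated into a per-user emotion profile and used to rank a feed. Every name, weight and data structure below is our own assumption for the sake of illustration – none of it reflects Facebook’s actual system.

```python
from collections import Counter

# The six response types on a post: the Like button plus the five
# Reactions rolled out globally in 2016.
REACTIONS = ["Like", "Love", "Haha", "Wow", "Sad", "Angry"]

def emotion_profile(reaction_log):
    """Aggregate a user's reaction history into a normalised profile.

    reaction_log: list of (post_id, reaction) tuples.
    Returns each reaction type's share of the user's total reactions,
    a crude proxy for emotional disposition.
    """
    counts = Counter(reaction for _, reaction in reaction_log)
    total = sum(counts.values()) or 1
    return {r: counts.get(r, 0) / total for r in REACTIONS}

def rank_posts(posts, profile):
    """Rank candidate posts by predicted emotional resonance.

    posts: list of (post_id, reaction_histogram), where the histogram
    records how other users reacted to that post. A post scores highly
    when its reaction mix matches the target user's own profile.
    """
    def score(post):
        _, histogram = post
        total = sum(histogram.values()) or 1
        return sum(profile[r] * histogram.get(r, 0) / total
                   for r in REACTIONS)
    return sorted(posts, key=score, reverse=True)

# A user whose reaction history skews towards 'Angry'.
log = [(1, "Angry"), (2, "Angry"), (3, "Sad"), (4, "Like")]
posts = [
    ("calm_news", {"Like": 80, "Love": 15, "Wow": 5}),
    ("outrage_bait", {"Angry": 70, "Sad": 20, "Wow": 10}),
]
print([pid for pid, _ in rank_posts(posts, emotion_profile(log))])
# -> ['outrage_bait', 'calm_news']
```

The point of the sketch is the incentive it encodes: content is ranked by predicted emotional resonance with the user’s own history, so a user who reacts angrily is served more anger-inducing material – the objective rewards engagement, not accuracy or civility.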

 

A New Dystopia?

Of concern is what happens when datafied emotion spills off-screen. One thing is clear: many people and organisations are interested in collecting data about emotion from all sorts of places – through Spotify, sports watches, cars, home voice assistants, schools, and so on.

In the same way that modern advertising works on a programmatic model – one that draws on a variety of data types to profile people and, increasingly, to create automated content – it seems highly likely that biometrics will feed into political campaigning and other efforts to exert surreptitious influence. We may soon see biometrically-enabled political campaigning (and fake news), featuring automated content sensitised to histories, interactions, places, situations and bodies.
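As a thought experiment only, the sketch below shows how such a programmatic pipeline might pick a message variant per person once a biometric signal joins the profile. Every field name, value and scoring rule here is invented for illustration; in a real system the scorers would be learned models updated by real-time feedback.

```python
from dataclasses import dataclass

@dataclass
class AudienceProfile:
    """All fields are hypothetical, mirroring the data types described
    above: behavioural, locational and (speculatively) biometric."""
    persuadability: float  # modelled from behavioural data, 0..1
    region: str            # location targeting
    arousal: float         # hypothetical biometric signal, 0..1

def select_message(profile, variants):
    """Pick the message variant predicted to resonate most strongly.

    variants maps a variant name to a scoring function over the
    profile; here the scorers are hand-written stubs, not models.
    """
    return max(variants, key=lambda name: variants[name](profile))

# Two hypothetical creatives for the same campaign message.
variants = {
    "reassuring": lambda p: (1 - p.arousal) * p.persuadability,
    "fear_based": lambda p: p.arousal * p.persuadability,
}

user = AudienceProfile(persuadability=0.8, region="swing_district",
                       arousal=0.9)
print(select_message(user, variants))  # -> 'fear_based'
```

A persuadable user showing high arousal is served the fear-based creative; the same pipeline, fed different signals, serves a different message – which is what makes such influence surreptitious.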

 

Addendum: Regulators are Rightly Concerned

In Europe, draft regulations for AI (which will be a long time coming, but profound when they arrive) explicitly mention emotion recognition. The Council of Europe (via Convention 108) is exploring bans on the use of data about emotion in certain contexts, and the EU’s draft AI regulation states that anyone subjected to emotion recognition must be notified. While these instruments are far from perfect and riddled with problems, mediated emotion, systems that judge, and AI are on regulators’ radar. This is a good thing.

 

 

Vian Bakir