Emotional AI and Civil Rights: The Salutary Case of Facebook

Last month saw the publication of Facebook’s Civil Rights Audit – Final Report. From the perspective of our emotional AI projects, this audit is well worth reading. We consider Facebook to be a significant emotional AI company that harms the world’s civic body when it enables the propagation of hate speech and affective disinformation, as well as deception, voter suppression and other civic ills. Those paying attention to the emotional AI sector may be more used to hearing about companies like Affectiva and Realeyes that utilise computer vision and Ekman’s ‘basic emotions’ typology. However, some of the most influential companies on the planet have long been interested in AI and emotion.

Facebook and Emotional AI

Facebook qualifies as an emotional AI company in that it uses AI to gauge and react to users’ expressed emotions online, in the name of increasing user engagement. Facebook is designed to promote items that generate strong reactions, regardless of whether these are positive or negative (Nadler et al. 2018). Since 2009, Facebook has incrementally introduced design features that enable it to collect and manipulate emotional data to fuel its advertising-based business model. For instance, in 2009 it introduced the ‘Like’ button. After testing various graphic means of enabling users to tag content with emotional data, Facebook settled by 2016 on Reaction icons that drew on popular emoji faces (Stark 2018). Alongside such design innovations, Facebook experiments on its users. For instance, in 2012 it infamously (and secretly) experimented on users’ News Feeds to study emotional contagion, finding that emotions are, indeed, contagious on Facebook (Kramer et al. 2014). In May 2017, a leaked document from Facebook’s Australian division suggested that Facebook had offered advertisers the ability to micro-target ads to teenagers based on real-time extrapolation of their mood during moments of psychological vulnerability, such as when they felt ‘worthless’, ‘insecure’, ‘defeated’ and ‘anxious’ (Tiku 2017, McStay 2018).

Facebook’s intention, and ability, to target and manipulate emotions are of global political and civic concern, given the platform’s popularity. Scholarship has long highlighted the importance of emotions in public and political discussion; in the construction of collective identities and social bonds; and in the engagement and mobilisation of voters and social movements (Richards 2007, Brader and Wayne 2016).

Facebook’s Civil Rights Audit

At the behest of civil rights organisations and members of the US Congress, who recognised the need to ensure that important civil rights laws and principles are respected and robustly incorporated into Facebook’s work, Facebook committed in 2018 to an internal civil rights audit of the core Facebook app in the USA. In July 2020, the civil rights auditors produced their final report, which finds some progress but also many significant failings and setbacks.

On a positive note, the audit concludes that ‘Facebook is in a different place than it was two years ago — some teams of employees are asking questions about civil rights issues and implications before launching policies and products’ (p. 6). The audit documents many changes, of which perhaps the most significant, given Facebook’s business model, is a commitment to a new advertising system, ‘so advertisers running US housing, employment, and credit ads will no longer be allowed to target by age, gender, or zip code — and Facebook agreed to a much smaller set of targeting categories overall’ (p. 6). This is an important step given that, in numerous previous legal filings across four civil rights lawsuits, Facebook had tried to place itself beyond the reach of civil rights laws by claiming immunity under Section 230 of the US Communications Decency Act.

Other significant changes include the following:

-       Investing in a team focused on studying responsible AI methodologies and on building stronger internal systems to address algorithmic bias.

-       Implementing significant changes to privacy policies and systems as a result of Facebook’s settlement with the Federal Trade Commission, which includes a privacy review of every new or modified product, service or practice before it is implemented.

-       Expanding voter suppression policies and creating a robust census interference policy. In 2018, when the audit began, Facebook had only a limited voter suppression policy in place; at the auditors’ urging, the policy is now more expansive. In September 2019, Facebook launched a ‘Don’t Vote Ads Policy’ and a ‘Don’t Participate in Census Ads Policy’ to prohibit ads that target US users with messages encouraging them not to vote or not to participate in the census.

-       Expanding its Inflammatory Ads policy (in June 2020) to also prohibit ads stating that people represent a ‘threat to physical safety, health or survival’ based on their race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, serious disease, disability, or immigration status. Content prohibited under this policy includes claims that a racial group wants to ‘destroy us from within’ or that an immigrant group ‘is infested with disease and therefore a threat to health and survival of our community’. However, the policy still allows advertisers to run ads that depict minority groups as a threat to things like culture or values (e.g. claiming that a religious group poses a threat to the ‘American way of life’). As part of the same policy update, Facebook will also prohibit ads containing statements of inferiority, expressions of contempt, disgust or dismissal, and cursing when directed at immigrants, asylum seekers, migrants, or refugees.

Do These Changes Go Far Enough and Fast Enough?

There are two key unknowns. Will the changes work in limiting the spread of affective disinformation in the USA? And will the changes be rolled out to other countries?

It remains to be seen whether these changes will significantly limit the spread of affective disinformation on Facebook in the USA. For instance, Facebook’s civil rights audit expresses concern that the new census interference policies (which cover misrepresentations of how and when to fill out the census, as well as dangerous and intimidating forms of suppression targeted at specific communities) are too slow to be effective. The policies are supported by proactive detection technology and human review, with violating content removed regardless of who posts it. However, it took Facebook over 24 hours (slow in digital contagion terms) to conclude that deceptive targeted adverts posted in March 2020 by the Trump Campaign violated the new policy and should be removed. This delay arose from the difficulty of anticipating in advance all the different ways that census interference or suppression content could manifest.

The adverts were deceptive in that they stated: ‘President Trump needs you to take the Official 2020 Congressional District Census today’. Clicking the link took users to a page on the Donald Trump website where they were asked to complete a survey focusing on Republican talking points such as ‘Obamacare’ and ‘the Democrats’ failed Impeachment Witch Hunt’. Those who completed the survey were sent to a web page calling for donations. These deceptive census ads were paid for by the Trump Make America Great Again Committee, part of Trump’s official re-election fundraising efforts (Gerken 2020). Sowing such confusion about the census has negative implications for the health of the civic body, as the census is used to determine the number of seats each state holds in the House of Representatives, as well as how federal funding is allocated for the next 10 years.

Alongside the complexities of recognising affective disinformation, Facebook’s civil rights changes apply only to the US Facebook app. Yet Facebook has more than 2.45 billion monthly active users globally, the majority of them outside the USA and Canada. Many of the countries where these users live have weak democratic traditions and weak public spheres, making Facebook’s influence more pronounced, especially where Facebook has partnered with phone carriers to make a small number of stripped-down web services (including Facebook) available for free through an app. First launched in 2013 and renamed Free Basics in 2015, the app had been taken up in 60 countries by 2018.

Unfortunately, this attempt by Facebook to connect the world has left many citizens in poorer countries entirely reliant on Facebook for access to information, eschewing all paid-for content (such as reputable news outlets). These countries include Myanmar, where, in March 2018, the United Nations called out Facebook for its role in allowing hateful posts that amplified ethnic tensions and incited violence, contributing to hundreds of thousands of Rohingya Muslims fleeing genocidal hate speech against them. They also include the Philippines, where Rodrigo Duterte hired trolls to spread propaganda for his (successful) presidential candidacy during the 2016 election, many of whom were retained to amplify messages supporting his policies while in power and to cast dissent as destabilisation. Indeed, in a recent survey of Commonwealth countries, Brown et al. (2020) find that, of the 25 countries that responded, by far the greatest proportion of reported cases of electoral misinformation on social media platforms is found on Facebook (>90%) and its service WhatsApp (>40%).

The Future for Emotional AI Companies: ‘Do No Harm to the Civic Body’

Although currently globally popular and hence influential, Facebook is not the only emotional AI company. Rather, emotional AI companies are rapidly proliferating across all domains, collecting increasingly rich streams of data about our emotions (McStay 2017, McStay 2018). Facebook has been forced to put civil rights at the heart of its design decisions by a succession of highly negative events that have harmed the civic body across the world. Rather than falling into this reactive stance, we recommend that all emotional AI companies embrace civil rights in their product design from the start, before their services become popular, and before they displace and damage fragile civic institutions, norms and practices. Rather than ‘Move Fast and Break Things’, we suggest that the guiding mantra should be ‘Do No Harm to the Civic Body’.

Vian Bakir