UK Parliament still wants to know how to address false information online. We say, dial down emotional contagion!

False information continues to proliferate online, despite years of multi-stakeholder efforts to quell it. In September 2022, we were invited by the UK Parliament’s Online Harms and Disinformation Sub-Committee (part of the Digital, Culture, Media and Sport Committee) to provide evidence to their Inquiry into Misinformation and Trusted Voices.

We addressed one of the Inquiry’s questions, namely: Is the provision of authoritative information responsive enough to meet the challenge of misinformation that is spread on social media?

Drawing on a multi-disciplinary literature review, and reflecting on the work we conduct at the EmotionalAI Lab, our submission concludes that any solution considered for countering the spread of false information online (including the provision of authoritative information) should be mindful of the many types of actors in play. Audiences are diverse, and studies show that what is perceived as ‘authoritative information’ depends on factors such as a given audience member’s political leanings, and their trust in the communicators, in the media outlet, and in ‘the system’.

Any solution should also be mindful of the communicative processes in play, be these:

- philosophical (e.g. the rise of relativism);

- epistemological (e.g. claims that we now live in a ‘post-truth’ world where appeals to opinion and emotion matter more than facts);

- cultural (e.g. the decline of trust in key institutions);

- political (e.g. the damage that ruling cultures of spin, deception, bullshit and corruption do to trust);

- economic (e.g. good journalism and fact-checking are expensive to produce, but people are unwilling to pay for news, which damages the quality of the news product); and

- psychological (e.g. experiments indicating that debunking false claims is only minimally effective).

We conclude that providing authoritative information to address the spread of false information online:

- Won’t sway people who already distrust that information (its content, source, or channel), or who spread false information to express their group identity or their dissatisfaction with the political system;

- Could prove useful for the undecided or confused if presented in an understandable fashion through trusted routes. 

As we can see, addressing false information by providing true information turns out to be very tricky!

Ultimately, rather than having to make difficult content moderation decisions about what is true and false on the fly and at scale, we conclude that it may be better to ensure that digital platforms’ algorithms optimise emotions for social good rather than solely for the profit of the platform and its advertisers. What this social good optimisation would look like is worthy of further study, but we posit that it would likely involve dialling down the emotional contagion, and engagement, that platforms elicit from users. To learn more about how and why we reach this conclusion, you can read our forthcoming book, Optimising Emotions, Incubating Falsehoods.
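What might a contagion-aware ranking objective look like in practice? The toy sketch below is purely illustrative, not a method from our submission or any real platform’s API: it assumes a hypothetical Post record with modelled predicted_engagement and predicted_contagion scores, plus a hypothetical CONTAGION_PENALTY weight, simply to show how an engagement-only feed ranking can differ from one that penalises predicted emotional contagion.

```python
# Illustrative sketch only: all names here (Post, predicted_engagement,
# predicted_contagion, CONTAGION_PENALTY) are hypothetical, not a real
# platform's ranking API.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # e.g. modelled probability of a click or share
    predicted_contagion: float   # e.g. modelled likelihood of spreading strong emotion

def engagement_score(post: Post) -> float:
    """Engagement-only ranking: the status quo this post critiques."""
    return post.predicted_engagement

CONTAGION_PENALTY = 0.5  # hypothetical weight; choosing it is the open question

def social_good_score(post: Post) -> float:
    """Same engagement signal, penalised by predicted emotional contagion."""
    return post.predicted_engagement - CONTAGION_PENALTY * post.predicted_contagion

posts = [
    Post("calm-explainer", predicted_engagement=0.40, predicted_contagion=0.10),
    Post("outrage-bait", predicted_engagement=0.70, predicted_contagion=0.90),
]

# Engagement-only ranking surfaces the outrage-bait post first; the
# contagion-penalised objective lifts the calmer post to the top instead.
print(sorted(posts, key=engagement_score, reverse=True)[0].post_id)   # outrage-bait
print(sorted(posts, key=social_good_score, reverse=True)[0].post_id)  # calm-explainer
```

The hard questions, of course, are how such a penalty weight would be set, by whom, and with what accountability; that is exactly the kind of further study we call for.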

 

Vian Bakir