The Truth about AI - The Royal Institution's Christmas Lectures 2023
Just before Christmas, Vian Bakir recorded a section on disinformation and AI for The Royal Institution's Christmas Lectures on The Truth about AI, in conversation with the University of Oxford's Prof. Michael Wooldridge, livecast with a studio audience of teenagers. With 2024 set to be a year of elections worldwide, she was asked to reflect on what she saw as the key issue at the intersection of disinformation, AI and politics. Below she offers some reflections.
How has AI changed things? Hasn’t political advertising always existed?
What’s new is micro-targeting. A micro-targeted audience receives a message tailored to one or more specific characteristics that the advertiser believes make the audience member susceptible to that message. In recent years, AI-powered adtech and recommender systems on social media platforms have become capable of learning about users from their vast online data trails. This allows people to be micro-targeted with personalised messages designed to resonate with the issues that interest them. The messages are crafted – often with the help of AI – in the manner most likely to engage that user (and others who share the characteristics deemed relevant), tapping into their hopes and fears. AI is then used to deliver those messages at the time when the user is judged most persuadable. This is very different to the past, when the same message was delivered to mass audiences through mass media at the same time.
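The pipeline described above – infer interests from a user's data trail, pick the message variant most likely to resonate, and time its delivery – can be sketched in miniature as below. This is a hypothetical toy illustration only: the profile fields, message variants and scheduling rule are invented for clarity and do not correspond to any real platform's API or model.

```python
# Toy sketch of a micro-targeting pipeline (hypothetical, for illustration).
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    interests: set = field(default_factory=set)  # inferred from the user's data trail
    peak_activity_hour: int = 12                 # hour the user is most active online

# Hypothetical message variants, each keyed to an inferred interest.
MESSAGE_VARIANTS = {
    "housing": "Candidate X will freeze rents.",
    "security": "Candidate X will fund more police.",
    "climate": "Candidate X will ban new oil licences.",
}

def pick_message(profile):
    """Choose the variant judged most likely to resonate with this user."""
    for issue, message in MESSAGE_VARIANTS.items():
        if issue in profile.interests:
            return message
    return None  # no tailored variant: this user is not targeted

def schedule_delivery(profile):
    """Deliver when the user is judged most persuadable
    (toy rule: their peak activity hour)."""
    return profile.peak_activity_hour

if __name__ == "__main__":
    user = UserProfile("u123", {"climate", "cycling"}, peak_activity_hour=21)
    print(pick_message(user))
    print(schedule_delivery(user))
```

Different users with different inferred interests receive different messages at different times, which is precisely why no two people need ever see the same advert – the property that makes micro-targeted disinformation so hard to audit.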
Why is micro-targeting important during elections?
Micro-targeting impacts political mobilisation. It can be democratically beneficial, but also democratically harmful.
On the beneficial side, micro-targeting reaches social groups that are traditionally hard to reach and often don’t vote (e.g. young adults). It also increases knowledge among voters about individually relevant issues, and it increases the efficiency of political parties’ campaigns.
Perhaps the key democratic harm is being micro-targeted with disinformation – namely, information that has been deliberately conceived to deceive the recipient. Because only the micro-targeted users see the disinformation, it is difficult for society to know who has been exposed to it or how widespread it is. This also makes it hard for society (journalists, fact-checkers, the political opposition) to fact-check and rebut the disinformation.
That disinformation can even contain messages designed to suppress turnout among certain groups of voters – as has been reported in the USA. For instance, in the 2016 US presidential campaign, candidate Donald Trump’s digital campaign reportedly used Facebook’s Lookalike Audiences ad tool to identify voters who weren’t Trump supporters and target them with psychographic, personalised negative messages to discourage them from voting. The effort was reportedly aimed at three groups: idealistic White liberals, young women and African Americans. There was also attempted targeted suppression of Spanish-language-dominant voters in the 2020 US presidential election, with disinformation about basic voting details and messaging intended to intimidate such voters. In the UK, during the 2016 ‘Brexit’ referendum campaign, Cambridge Analytica pitched its data-analytics services to Leave.EU (an unofficial campaign group); part of that pitch offered voter suppression.
Do we care about these democratic harms?
Studies show that we don’t want to be micro-targeted with manipulative information. Surveys in the UK, Germany and France show that majorities are concerned about the potential impact of deepfake and profiling technologies on elections. A UK survey shows that a majority is concerned about the potential to be politically manipulated by AI systems that use data about our emotions.
Where can I find out more?
To see this segment on AI and disinformation, watch lecture 3. For UK audiences, it is freely available on BBC iPlayer; it was originally broadcast on BBC Four at 8pm on 28th December 2023. For international audiences, it is freely available on The Royal Institution's YouTube channel.
For a deep dive into these issues, why they matter, and what can be done about them, you can check out the underpinning book, Optimising Emotions, Incubating Falsehoods (free to download).