Cross-Cultural Conversation on Emotional AI: Japan and UK
Investigators: Prof. Vian Bakir, Dr. Lachlan Urquhart, Prof. Peter Mantello and Prof. Andrew McStay.
Title: Emotional AI: Comparative Considerations for UK and Japan across Commercial, Political and Security Sectors
Full project site with all details: here.
Funded in 2019 by the ESRC, a UK Research Council, this project seeks to generate conversation among UK and Japanese academics, industry, artists, NGOs and regulators. The goal is to understand the constitutional, commercial, civic, policing and security implications of a world where emotional AI increasingly plays a central role. While the academic part of the project will use insights to inform a subsequent grant application, the overall objective is broader: to identify the benefits and harms of these technologies and, central to this project, the cross-cultural considerations. For example, are ethical and privacy considerations the same in the UK and Japan, or do they emerge out of a situated social context? If the latter, what are the similarities and differences?
We’re holding two separate workshops in Tokyo dedicated to the commercial and security/policing dimensions. These will run from July 8th 2019 and be held at Ritsumeikan Tokyo Campus, Sapia Tower. Following these, we will host another at London’s Digital Catapult on Sept 9th 2019. Interested in coming? Get in touch with Andrew at firstname.lastname@example.org. The workshops are best suited to academics interested in the impact of new technologies (computer science, normative, legal, etc.) and those working in technology, policy, policing, security, NGOs and electronic art, but do get in touch if you don’t see yourself in this short list. Travel funding is available for those within Japan :)
Fake news and emotions
Professor Andrew McStay and Professor Vian Bakir of Bangor University assessed the connection between fake news and emotions. We argue that what is most significant about the contemporary fake news furore is what it portends: the use of personally and emotionally targeted news produced by algo-journalism, empathic media and emotional AI. In assessing solutions to this democratically problematic situation, we recommend that greater attention is paid to the role of digital advertising in causing, and combating, both the contemporary fake news phenomenon, and the near-horizon variant of empathically optimised automated fake news. Written evidence for UK parliamentary inquiry here and our analysis of all parliamentary submissions here. Our academic paper 'Fake News and The Economy of Emotions: Problems, Causes, Solutions' here.
Rights of Childhood: Affective Computing and Data Protection
Investigators: Prof. Andrew McStay and Dr. Gilad Rosner (Founder: IoT Privacy Forum)
Funded by the HDI+Network and EPSRC, a UK Research Council.
Recently awarded, this funded project (2019-20) is identifying the socio-technical ethical terms by which bodily affective child-oriented Internet of Things (IoT), such as toys, should function. We will assess the state of emotion detection, focusing on technologies, products and services that impact children. This is in order to: understand how they intersect and affect children’s information rights; identify which criteria are required to create consumer trust in emotion-sensitive products; and understand how children and parents can be empowered by technology design and rights. Methods include surveys, focus groups (with Institute of Imagination) and “elite” interviewing.
Funded by the Arts and Humanities Research Council, this project involved: over 100 interviews with industrial, political, security, legal and NGO stakeholders; a UK survey (n=2068); and a workshop at Digital Catapult (UK) with relevant stakeholders to explore scope for ethical guidelines. Overall the research finds that there is overlap between stakeholders on how best to manage the emergence of these technologies, but this is not currently being achieved. It concludes by identifying beneficial uses of these technologies, but also an ethical and regulatory lacuna. Mindful of the dangers of regulating early, the report nonetheless recommends regulatory attention. It also urges relevant sectors of the technology industry to recognise that there is self-interest in collective consideration and action regarding the negative societal implications of tracking emotional life. See here for academic papers and articles, and here for the overall project report.
Art and creativity
There is enormous scope for emotional AI to be used for social good. A key domain is art, where emotional AI provides scope for new modes of expression and audience engagement. Andrew McStay has advised and continues to work with electronic artist Ronan Devlin, whose work Aura is currently touring the UK. See images below from exhibitions in Lancaster, Leeds and London. Devlin’s work uses facial coding techniques and maps incoming data in real-time to a colour-wheel. These colours are then expressed through a variety of treatments that visitors can interact with. The Canary Wharf version also has an ‘Easter egg’ built in (if a person is static and expressionless, their face rises like an apparition to be displayed on a gigantic water fountain). Very cool, Ronan!
How to Live Well With Emotional AI 2019 lecture series
As 2019 William Evans Fellow for the University of Otago, Andrew will deliver a series of sessions titled ‘How To Live Well with Emotional AI’, organised by Dr. Roel Wijland and The Brandbach, the creative industries specialisation of the University of Otago. If you’re around, come by! Sessions below.
Session 1. Tuesday May 7 / 13:00 – 14:30, Introducing Empathic Media, Otago Business School / lecture room G.02, Question time 14:30-15:00
Session 2. Thursday May 16 / 12:00 – 13:00, Creativity, Dunedin School of Art / venue P152
Session 3. Friday May 17 / 13:00 – 14:00, Hybrid Ethics, Dpts of Computer Science, Information Science & Law
Session 4 (with Prof. Vian Bakir). Tuesday May 21 / 11:00-12:00, Democracy, Disinformation, Fake News, Solutions, Dpt of Film and Media Studies, Tea 12:00-13:00
Link to poster here.
Emotional AI: The Ethical Checklist
There’s much ado about ethics checklists, and their critics have a point: ethical posturing may be used as a smokescreen to avoid regulatory development. So, is that it for ethics? Is it only law that matters? Have ethics been reduced to PR effluent? We see otherwise. We say that ethics, and indeed targeted checklists, may play a useful role in a balanced diet of technology governance.
For this we draw on Luciano Floridi’s distinction between hard and soft ethics. (The former concerns morals that do and ought to inform laws and policy; the latter concerns what ought to be done over and above existing regulation, after compliance.) The soft element is reflected in what we understand to be the first ethical checklist designed for those working with AI and data about human emotions.
Unfettered by the legal need to speak in technology-neutral terms, we were able to suggest practical means of achieving normative ethics. Go on, download the Ethical Checklist, print it, read it, use it and perhaps disagree (shock, horror!): we want to know what you think of this living document. Tweet thoughts to Andy (@digi_ad) and Pam (@paminthelab)!