Automated Empathy – Globalising International Standards (AEGIS): Japan and Ethically Aligned Regions
Funded by the Responsible AI UK Impact Accelerator, project AEGIS sees the Lab partnering with Japan’s National Institute of Informatics, the Institute of Electrical and Electronics Engineers (IEEE), Monash University (Indonesia) and engaging the Information Commissioner's Office (UK).
The goal of AEGIS is to host a series of workshops, assemble a diverse expert working group and develop a ‘technical standard’ to address the use of emulated empathy in general-purpose artificial intelligence systems for human-AI partnerships.
Provisionally titled Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems, this IEEE standard will define ethical considerations, detail good practices, and augment and complement international human rights and regional law.
Use cases encompass general-purpose artificial intelligence products marketed as ‘empathic partners’, ‘personal AI’, ‘co-pilots’, ‘assistants’, and related phrasing for ‘human-AI partnering’. Current and nascent domains of use include work, therapy, education, life coaching, legal problems, fitness, entertainment, and more.
These systems raise ethical questions that are global in nature, yet benefit from diverse ethical approaches, especially where systems feed into the design of human-centered technologies. Some ethical questions are familiar (e.g. transparency, accountability, bias and fairness), but others are more specific, including psychological interactions and dependencies, child appropriateness, fiduciary issues, animism, and manipulation through partnerships with general-purpose artificial intelligence systems.
The project augments the Emotional AI Lab's UK-Japan social science work. It also sees global value in drawing on a range of ethical frames of reference by which to account for human-AI partnerships, not least those of Japan and ethically aligned regions, given their long-standing interests in human-technology partnerships.
The project leads are: Prof. Vian Bakir (Bangor University), Ben Bland (IEEE P7014), Dr. Alex Laffer (University of Winchester), Dr. Phoebe Li (University of Sussex) and Prof. Andrew McStay (Bangor University/Project Lead). Friends include Dr. Frederic Andres at the NII and Dr. Arif Perdana at Monash, Indonesia.
For engagement or further information, contact Prof. Andrew McStay, who will answer queries or connect you with the right team member.
Papers
V. Bakir & A. McStay, “Is Deception in Emulated Empathy Innately Bad?” IEEE Xplore, 13 Dec. 2024.
V. Bakir, K. Bennet, B. Bland, A. Laffer, P. Li & A. McStay, "When is Deception OK? Developing the IEEE Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems", IEEE ISTAS, Cholula, Sept. 2024.
A. McStay, F. Andres, B. Bland, A. Laffer, P. Li & S. Shimo, "Ethics and Empathy-Based Human-AI Partnering: Exploring the Extent to which Cultural Differences Matter When Developing an Ethical Technical Standard," IEEE Xplore, pp. 1-28, 28 Aug. 2024.
Standards
B. Bland, WG Chair (2024, completed) P7014-2024 IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems.
A. McStay, WG Chair (drafting) Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.