New report time! ‘Emotional AI and Children: Ethics, Parents, Governance’
Teaming up with Dr. Gilad Rosner of the IoT Privacy Forum, we set out to explore the socio-technical terms by which emotional AI and related applications should be used in children’s toys, if at all. Alert to the potential for corporate abuse and questionable ethics (and legality), we also saw scope for enhanced interactivity, heightened play, greater enjoyment and, depending on the design of the toys, potentially even creativity.
Report here.
We were especially interested in the views of parents, so in addition to interviewing experts from the emotional AI industry, policy, academia, education and child health, we carried out UK surveys and focus groups (the latter with much help from Dr. Kate Armstrong and the Institute of Imagination in London).
Take a look at the report: it’s not long and we’ve kept the language very clear! In short, fairness, support for parents, care for childhood experience and good governance are going to be key when these toys and services properly emerge. We’re convinced this is a matter of when rather than if, especially when cast against the history of toys and automata. Yet there is unlikely to be a tipping point either. Like so many technologies that once seemed weird (like wearables), affect and emotion sensing will (for better and worse) become omnipresent with little fanfare.
The report itself (and academic papers to follow) considers embodied AI and conversational agents, but also safety and wellbeing items (including wearables). This territory is fraught in so many ways (questionable methods, privacy, data protection, influence and unhealthy relationships with AIs), so we set a high bar in our conclusions and recommendations. Take a look and tweet us at @EmotionalAI and @GiladRosner if you have thoughts.
BIG thanks go to the HDI Network+ team and the EPSRC, who made this study possible.