
All smiles, or is there more to it?

There is much to value in Emotional AI. After all, what it promises is a better experience of services, devices and technologies. Indeed, this class of data offers designers, artists, citizens and others novel ways to understand the self and others. However, as with many other aspects of digital life, there are wider considerations that give cause to question the rollout of these technologies. Citizens, researchers, policy-makers and industry folk might consider the following:

  • Overall, is it desirable that emotions are machine-readable?
  • Will this data always be used in a manner that benefits citizens?
  • Are protections adequate? That is, are our laws and regulations appropriate?
  • Is the spirit of data protection appropriate? Data protection tends to focus on identity, but is identity the principal issue here?
  • Do we want retail outlets detecting emotions? Do we want shelf-level cameras reading emotional expressions?
  • Are we OK with social media companies registering mental states, emotions and moods? Being happy is one thing, but what about depression? Don't forget how these companies make their revenue.
  • What of third-party uses by data brokers? After all, companies in the business of selling want to know how we feel.
  • Are we OK with insurance companies (such as car and health insurers) using data about bodies and emotions to make decisions?
  • Are we OK with at-home emotion tracking by voice agents, media consoles and even sex toys (yes, really)?
  • Are we OK with at-work profiling (to judge stress and decision-making effectiveness)?
  • Are we OK with mapping citizen reactions to cities, urban spaces, urban features, buildings and events?