Hostile Discourse in AI Governing Philosophies: Permissionless Innovation vs. Precautionary Principle

Hostility towards regulatory precaution is embedding itself at high levels of AI technology policymaking in the USA and the UK. Policymakers in some quarters are hijacking language to present precaution in the field of technology as tantamount to Luddism, in order to advance a specific neoliberal and libertarian agenda.

Gilad Rosner (IoT Privacy Forum) and Vian Bakir (Bangor University) were funded by the EPSRC’s Human-Data Interaction network to consider governing philosophies in AI policymaking. Their White Paper focused on the Precautionary Principle (which considers the potential for future harms and takes anticipatory action before full certainty is achieved) and its antithesis, ‘Permissionless Innovation’ (which presents government regulation as the enemy of innovation).

After studying the evolution of the concepts of Permissionless Innovation and the Precautionary Principle, and reflecting upon the wide range of social and democratic harms ushered in by minimally regulated AI, they recommend that governance actors in the area of digital technologies actively use the language of the Precautionary Principle to:

- Communicate nuanced stances on the regulation of innovation
- Eliminate straw man caricatures
- Reject the false choice of innovation versus regulation
- Acknowledge that innovation does not always have beneficial outcomes.

They note, as a case study, that the proposed EU draft regulations on AI expose the supposed dichotomy of innovation versus precautionary regulation as bad theatre: it is possible to have both.

For instance, the draft regulations propose to ban certain AI systems: those that use subliminal techniques to affect someone’s behaviour in ways that could cause harm; that exploit the vulnerabilities of specific groups; that assign people social scores leading to unfavourable treatment in contexts unrelated to those in which the data was originally generated; and that perform real-time remote biometric identification in some circumstances.

However, the draft regulations take a light-touch approach to governing other AI systems. Indeed, they contain two key regulatory mechanisms that align perfectly with the tenets of permissionless innovation: codes of conduct and transparency.

The draft Regulation requires the European Commission and Member States to facilitate the creation of voluntary codes of conduct through which non-high-risk AI applications can conform with elements of the overall Regulation – in other words, self-regulation.

It also specifies transparency obligations for certain types of AI systems, including emotion detection (an area that readers of this blog will know is generating much innovation globally, as well as a wide range of social and democratic concerns). In keeping with permissionless innovation tenets, such transparency serves to educate the public so that norms may emerge organically.

[Image: Eliminate Straw Man Caricatures]

Vian Bakir