Public organisations are interested in using AI for purposes such as risk identification and assessment, prioritising riskier cases over others, building intelligence and predictive models, or making policy more responsive to societal needs and policy effects. They hold large amounts of data, and they are using enhanced capacities to collect and process different kinds of data to these ends. These advanced technologies can make the work of public sector professionals, policy-makers and decision-makers more effective and more efficient, and ideally more rationalised and evidence-based.
Within such use, the legitimate use of algorithms by government depends on the capability to protect, promote and live up to public values. Insofar as algorithms make the work of government more effective and more efficient, they promote values such as the proper use of public funds. Yet we are also, and increasingly, aware of threats to values such as equality of treatment and privacy, stemming for instance from bias. Given their responsibilities, governments – far more than the private sector – must address these challenges through the responsible deployment and use of algorithms. Key to societal acceptance of the use of AI by government is that governments are transparent about how they deploy these technologies, what limitations exist and how they deal with them. In the end, society must be able to hold governments and their agents accountable for their decisions and the way those decisions came about.
With my new team at Leiden University we approach this topic from what we call a ‘policy realism’ perspective. This concerns data, organisational and policy aspects. We focus on the important question of what the data governments actually hold realistically allows them to do, as data itself is a genuinely difficult topic. Advanced algorithms are, moreover, notoriously hard to understand, especially for those who lack relevant expertise. The choices, limitations and trade-offs reflected in the development, deployment and configuration of an algorithm, or in the interpretation and visualisation of its outcomes, may be lost on the users of the insights it produces. And if users do not understand the choices, limitations and caveats behind those insights, that understanding does not find its way into the decisions or policies the analysis supports.