In 2019, artificial intelligence is still a buzzword, offering the opportunity to hold policy debates about the societal and individual harms and benefits of automated decision-making, big data, machine learning, and robots under the same umbrella term, depending on the agenda and taste of the given event organiser.
While all these conversations about Artificial Intelligence with a capital A and I remain painfully stuck between voluntary ethics guidelines, sandboxing for innovation, and calls for the application of human rights frameworks, the use of AI systems is already being written into laws.
Instead of comparing in general terms the most prevalent policy tools on the table that are characterised as frameworks for artificial intelligence (e.g. ethics guidelines, impact assessments, regulations), we will pick one very specific, well-defined AI-related situation/decision/case and see what answer or solution each of these policy tools would give to that problem.
The three policy tools that we will consider using as frameworks:
Ethics guidelines: On Monday 8 April, the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG) published its “Ethics Guidelines for Trustworthy AI”. The Guidelines introduce and define the concept of trustworthy AI as a voluntary framework for achieving lawful, ethical, and robust AI. Alternatively, we would pick ethics guidelines developed by a private-sector actor.
An impact assessment: AI Now’s algorithmic impact assessment model.
A human rights-based, normative framework: we believe that by the time of the event the Council of Europe will have released a draft framework relevant to AI.