IGF 2022 Day 3 WS #337 Assurance and certification of emerging digital technologies

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ANSGAR KOENE: We can start with the general introduction and hopefully others will join in by the time I'm done with that; Ashley Casovan is probably having some of the typical issues of launching the right app, et cetera.

Welcome, everyone, to the session on assurance and certification of emerging technologies.  What we want to talk about, hopefully with some dynamic interaction with you in the audience, is to discuss some of the developments that we're seeing around emerging technologies ‑‑ great, hey there, I have only just started ‑‑ and I will focus in on AI among these as a case study, as the type of emerging technology that at this moment is attracting a lot of attention around trying to figure out how best to regulate, assess and provide trustworthiness for this kind of technology.

Well, what we see is that, over the last 20-odd years, digital technology has matured to where it has become an integrated element of pretty much anything we do: through the use of the Internet, the services that online platforms are providing, and technologies like AI, blockchain and other kinds of technologies like this.  As technology has become more pervasive and embedded in services, it has come to have more significant impacts on people's lives.  We see this, for instance, in the use of AI for assessing whether somebody is going to have a chance of a job interview or not.  We see it in the way in which AI is involved in governing the electricity supply, switching between renewable resources in order to manage the downside of those resources, which is that they are dependent on the environment; if you can combine them through a smart grid, you can compensate for that and make them just as resilient to weather, et cetera, as any traditional energy source would have been ‑‑ actually more resilient, because of their distributed nature.

We're seeing technologies such as AI becoming important for us to be able to have beneficial outcomes and improvements in our lives.  But, of course, together with the real world impact that the technology has comes the potential that, if the tech is not built to the best possible specifications and does not live up to the requirements needed to serve that kind of application, then problems arise.  We see these, for instance, to go back to the human resources case, in the form of discrimination and bias issues, problems in reliability, and in the failure to sufficiently understand what exactly it is that the model learned and, therefore, not knowing how to interpret the outcomes.

In order to try to address these types of issues ‑‑ and we have seen this earlier with the increasing use of online data about people, and the data privacy legislation ‑‑ we see this move to the creation of new regulatory frameworks, providing citizens with a basis to say: yes, I can trust that I can engage with these technologies without needing to be an expert in them, which would otherwise be required for me to do a personal assessment of everything.  Instead, a trusted third party, a trusted process, has assessed that the system is of sufficient quality, addressing all of the human rights elements that need to be addressed in order for it to be proper to use this technology in this particular space.

So I can engage with them.

Now, as we see growing discussions in various countries and intergovernmental institutions on how best to approach the regulation of these kinds of technologies, we also need to be thinking about regulation itself: the mere existence of a piece of legislation is not really enough to provide that trustworthiness.  What you also need is a reliable, consistent process for assuring that compliance with that regulation is achieved.

That's where the whole ecosystem of assurance, certification comes into play which we'll be discussing today.

So, through the international panel that we have here, we will be looking at some of the developments that are happening around AI regulation: in the E.U., through things like the AI Act; in Canada, through things like the Digital Charter Implementation Act.  We will touch on some of the activities in China as well, with the regulation of online content and recommendation systems, but also on what the international organizations are doing: the guidelines on ethical use of AI that came out from UNESCO, the work that UNICEF is doing to make sure that technologies are used in a way that recognizes the rights of children, the work that the OECD is doing to bring together different member countries to coordinate the ways in which they're approaching regulation, and also the work at the Council of Europe to set up guidelines and rules around human rights and democracy respecting uses of AI.

There are many other areas where regulation of AI is on the table and being discussed, but as I said, this cannot live purely in the space of creating the regulation.  There needs to be a supporting framework as well.  This is where the technical and procedural standards development organizations ‑‑ ISO/IEC, ITU, IEEE ‑‑ come into play, to provide clear guidelines that industry across the globe can refer to in order to understand the leading practice for addressing questions around how to do risk management, for instance; around how to classify different types of AI systems in order to know what kind of challenges they may pose; how to do robustness assessments of neural networks; and how to approach questions around transparency.

All of these need to work together in a coherent way in order to provide people with what the regulatory approach is really trying to achieve, which is an ecosystem of trustworthiness: a way in which people can know, yes, this piece of technology is trustworthy, I can engage with it without having to second-guess at every stage whether or not it is actually performing correctly; or, this other piece of technology looks to have great potential, but it hasn't been assessed.

I can engage with it, but if I do so, I may need to take some other precautions; I may need to use it in a slightly different way than something that has gone through this kind of assurance and certification process.

With that general introduction to the theme of today's discussion, I would like to start introducing my fellow panelists.

We have John Higgins ‑‑ and I think the best thing is if we go through the round and I let each of the panelists introduce themselves; that way, they can also highlight how their work relates to the theme of today's talk.

I will go last.  For those that may be curious, I'm Ansgar Koene, global ethics leader here and I'm the proposer of this panel and it is my absolute pleasure to be able to share this panel with my distinguished guests.  I hand it over to John Higgins to introduce himself.

>> JOHN HIGGINS: Thank you very much.  Hello, colleagues.  Good morning, good afternoon, good evening wherever you are in the world.

I thought in my opening remarks I might just cover three things.

A little bit about my history that helps me and hopefully you understand where I'm coming from, what shapes my opinions.

Second, the sort of things that I'm involved with now that are relevant to the question, the exam question you have set us, Ansgar.

Third, some of the issues that build on some of the comments you were making ‑‑ some of the issues that we're dealing with today, which in my mind raise more questions than they provide answers at the moment.

The relevant bit of my background is that for many years I represented the tech industry in Europe as the Director General of DIGITALEUROPE.  That involved contributing technology industry perspectives and points of view to the European policymaking process.

Before that, I did very much the same thing in the U.K.

In the course of doing that, one relevant project that I got involved with was the European Commission's Strategic Policy Forum, looking at the digital transformation of Europe's industrial base ‑‑ industry and not-for-profits, the relevant parts that form the economic engine of Europe.

Just to make that real: one of the things that I continue to be involved with today is the Intelligent Cities Challenge, with over 100 European cities, where the cities are trying, amongst other things, to use digital technologies to tackle city challenges.  It is great to see it from the policymaking process at the Commission right down to what's actually happening on the ground in cities.  That's really helped shape my view.

Today I do a variety of different things with universities, professional bodies, think tanks and so on.  A couple of the things I have been involved with are, I think, relevant to the question we're trying to answer today.  The first concerns the practicality of providing this sort of trustworthy assurance within the AI value chain.  It became clear that a number of companies were beginning to gear up, to think about: how do I comply with the regulation as it emerges from the E.U., but also with the implicit societal demands, of which the E.U.'s AI Act is one manifestation but not the only one.

Companies are thinking about how to comply, and realizing that in order to be able to comply, to demonstrate their compliance, they need to be confident that others in the value chain who are providing to them ‑‑ maybe subcontractors, third parties ‑‑ that they too comply.  If I'm using datasets provided by someone else, how do I know they're not biased?  I can't know that myself; they're not mine.  How do I get that assurance?  If I'm using an algorithm from somewhere else ‑‑ you mentioned transparency, one of the most difficult things I think in AI ‑‑ how do I know, how can I be confident, what this algorithm does and how it will react in different circumstances?

People who are going to be, as it were, putting their business on the line ‑‑ saying yes, I comply, I'm putting something on the market or bringing something into service and it meets the requirements ‑‑ need to be confident in the others in the value chain they are working with.  We have been looking at that as a piece of work, and that is, in a sense, shaping some of the thinking that I can contribute to today's questions when we get into them.

The other thing that is relevant to this debate is that for the last year or so I have been the President of the U.K.'s professional body for digital practitioners, with 70,000 individual practitioners who all want to be professional.  By professional, we mean having ethical values, having the competencies, and understanding that they're accountable.

So we can all be good, competent practitioners; and they are a body of people ‑‑ 70,000 in the U.K., with similar bodies around the world ‑‑ who actually want to demonstrate that they conform to an ethical framework and actually sign up to one.

So we have been exploring the impact of individual professionalism on being able to provide trustworthy systems.  You can compare that to other professions, like medics and so on, where we rely very much on knowing that people have signed up to a set of professional values and ethics; that enables us to trust not just them but their deliverables.  Those are two pieces of work.

Now, those are the first two things I wanted to cover.

Finally, as it were, the problem domain that you described.  If I just take the European example for a moment: where are we?  Well, we're in a position in which we are just beginning what we in Europe call the trilogue phase.  There are three European institutions involved in policymaking.  The Commission ‑‑ the officials who, having been instructed by the Member States, come up with the proposal, in this case the communication on AI.  Then the Council, where the Member States come together and start working on it to put their own imprint on it, generally driven by slightly different dynamics.  And then the directly elected parliamentarians, the MEPs, getting together in various Committee structures, and they too get into it.  The three ‑‑ God knows how Europe works, but it really does ‑‑ come together, and you end up with the final thing.  These things are more commonly regulations, which become law in every country, or it could be, and still sometimes is, a directive, which has to be implemented in local law.

Anyway, they come up with that thing.

Now, even when I describe the process casually like that, it is a lengthy process; we have been talking about this for quite some time.

In the meantime, society's awareness that something is needed grows.  You often hear people say, yes, of course we need regulation of AI.  Well, okay, why?  Because the horses have been frightened by a number of different things that have happened ‑‑ often public authority use of AI that has caused outcomes that made the public go, oh, I don't like that.

Something needs to be done about that.  I want to be confident that there is something around this.

You have got increasing societal demands for something that creates confidence in what they're doing.

You have a process in Europe, for sure, that still has some way to go.  In parallel, you have international standards bodies developing standards and, of course, the way European regulation often works is that the two go hand in hand: the regulation's implementation is by reference to standards, European or otherwise.  Those are also some way off.

So we're in a situation, I think over the next two or three years, where you have increasing demand but you don't actually have anything to pin it to.

So in that situation, what could be used to fill the gap?  Is there a role now for some sort of trust marks that don't necessarily have government backing?

There are lessons from other places that tell us those can work quite well ‑‑ trusted trader schemes, or even, I wonder, the dynamic these days of user feedback.  You know, crowdsourcing the assurance: could that provide something?  The question I leave on the table is how do we fill this gap: can we have either industry-led or civil society-led certification that people can sign up to as a demonstration that they are providing something that complies with something?  What it complies with, of course, still has to be worked out.

I think the question is, is there ‑‑ does something need to fill the gap?  I think it does.  What is it?  I hope that that's the debate that we're going to go on to have.

>> ANSGAR KOENE: Excellent.  Thank you for the introduction.  Also for putting that question on the table.

Before we go into addressing that and other questions, I'll hand over to Edson Prestes to introduce himself.  Perhaps you may also want to touch on that question a little by referring to some of the work you have been doing around AI ethics principles and those kinds of elements, which to my mind seem to play somewhat in that space of an initial approach: how can we think about what good development and good engagement with these technologies looks like, and how can we signal that to people outside of your own organization.  Over to you, Edson.

>> EDSON PRESTES: Thank you so much.

Good morning, good afternoon, good evening.  It is my great pleasure and honor to be here, with my distinguished fellows.

Before I start, I would like to provide some background, because I changed my career some years ago.  I'm a roboticist ‑‑ I got my Ph.D. in that field ‑‑ but I have worked with technology since 1993, when I got my bachelor's in computer science.

Around six years ago I joined the global initiative on the ethics of AI, and I realized how important it is to discuss ethics in AI.  Before that, I had not had much discussion about the ethical implications or the human rights impact of emerging technologies.

Because I'm in the Global South, and because most political decisions are not centred on the citizens, I realized the impact on the Global South would be huge.

Because of that, I started working on the ethics of AI and human rights in the digital age.  After that I joined several initiatives in industry and also in multilateral organizations.

In industry, I'm chair of IEEE 7007, the Ontological Standard for Ethically Driven Robotics and Automation Systems, which focuses on creating a standard containing several ontologies that guide us on how to create ethically aligned robotics and automation systems.

In this standard, we have different aspects related to norms and ethical principles that are relevant when discussing ethical AI and ethical robotics.  If you think about a robotic system, we think about a system that should follow a set of norms, a set of principles ‑‑ like protecting citizens, guaranteeing physical integrity, and so on.  We also have aspects related to data privacy and protection, ethical violation management, transparency and accountability, so we see concepts related to liability, responsibility ascription, data, and so on.

So we think that this could help provide guidance on very relevant aspects of regulatory mechanisms.

Like if you want to regulate some technology you have to understand the technology.

In that particular sense, the ontologies provide this understanding: they break the domain into several components and link these concepts.  Like, I know that transparency is very important, but the evidence for it comes from data, and transparency can be provided in different kinds of formats: it could be provided as text, as video, as audio, and so on.

Also, the way transparency tries to mitigate particular concerns depends on the particular audience.

According to the information provided by transparency, I can also ascribe responsibility, which could be shared or not, and which could imply particular consequences.  This understanding is very important if we want to create fair regulation, because you can have a clear picture of the domain and also highlight the concepts that are important for it.
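
(To make the idea of linked concepts concrete, here is a minimal illustrative sketch ‑‑ not the actual IEEE 7007 ontology; all class and field names are hypothetical.  It shows how transparency format, audience and responsibility ascription can be modeled as explicitly linked components rather than prose.)

```python
# Hypothetical sketch of an ontology-style domain model: concepts such as
# "transparency", its format, its audience, and the responsibility it grounds
# are linked classes rather than free text.  Names do not come from IEEE 7007.
from dataclasses import dataclass
from enum import Enum

class Format(Enum):
    TEXT = "text"
    VIDEO = "video"
    AUDIO = "audio"

@dataclass
class Audience:
    name: str  # e.g. "end user", "regulator", "developer"

@dataclass
class TransparencyMeasure:
    format: Format      # how the explanation is delivered
    audience: Audience  # whose concerns it is meant to address

@dataclass
class ResponsibilityAscription:
    agent: str                      # who is responsible (could be shared)
    based_on: TransparencyMeasure   # the information grounding the ascription

# Linking the concepts: a textual explanation aimed at regulators grounds
# a responsibility ascription to the system's operator.
measure = TransparencyMeasure(Format.TEXT, Audience("regulator"))
ascription = ResponsibilityAscription(agent="operator", based_on=measure)
print(ascription)
```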

But this standard is relevant for other purposes too: not only to understand the domain, but also when you want different nations or different stakeholders to communicate with each other.  That is very relevant.

Observe that the digital domain can be seen as a huge human body.  The different parts of the body are different parts of the world; each part has different stakeholders, and the data flows like blood.  So if something happens in one specific part, it affects the entire body.  It is important for us to have clear and precise communication; otherwise, if something happens in one hand, you cannot deal with the problem before the infection is transmitted to the feet.

So this clear communication is important, and it is provided by the ontologies because of their nature.  That leads directly to human and institutional capacity building, which is very important.  Unlike countries in the Global North, some countries in the Global South do not at the moment have the capacity to deal with AI problems.

Also, in some of these countries we do not have a strategic plan on AI, or a clear understanding of the different impacts of AI on citizens' lives.

Also, because of the nature of these ontologies, we can create tools for tracking development and for monitoring impacts.  Imagine that different nations talk a similar language and have a software stack that represents the different ontologies, and we choose this way to share information.  Okay, say you have a specific system that focuses on face analysis that has an impact on some marginalized community, and this impact affected more black people than white ones; the regulations that were employed in that particular scenario can be shared.  Observe that this particular problem might happen in Brazil and not yet in the U.S. or the U.K. ‑‑ but why should the U.K. need to wait for this problem to happen before taking action?  Why not understand what is happening in other parts of the world?

Using this, we can create a multilevel framework focused on different nations, but also on regions and on a global scale.

In that sense, I was involved with UNESCO as a member of the Ad Hoc Expert Group that created the Recommendation on the Ethics of Artificial Intelligence, and one of the recommendations we proposed in that document was that all AI governance mechanisms be inclusive, multidisciplinary, multistakeholder and multilateral.  Using this framework, we can allow the participation of citizens, non-governmental organizations, governmental organizations and industry, to discuss the domain, to discuss the issues, and to discuss how to mitigate those issues.

Also if you speak the same language, you could share this information across different areas.

You can also engage people from different segments of society because, as we know, government, civil society and the others each have just a partial view of the domain ‑‑ as do I.

I will talk for myself.

When I was working only in robotics, I focused on accuracy, precision, repeatability, but not on the wider aspects.  It is very important to have a holistic view of the domain to see: okay, this technology will impact democracy; this technology will impact the development of society; this technology will impact education, and consequently industry; this technology will impact the right to life, because if the engineer does not understand the impact, we may have machines that decide whether someone will live or not.

If you have this understanding, then you can have fair regulation.  But observe that it is simple to say we have fair regulation when what we actually have is a certification that is just a checkbox ‑‑ to say, okay, this system has transparency level 3, it is safe for a user.  We have to go further; we have to have a more rigorous risk evaluation of this domain.

I was also involved in the United Nations Secretary-General's High-level Panel on Digital Cooperation.  This panel, of which I was a member, proposed mechanisms to strengthen cooperation in the digital domain across nations.  The idea is to mitigate potential issues and to promote the digital domain for the benefit of society without leaving anyone behind.

We proposed some recommendations, and one of them is very, very relevant here and aligned with the other aspect that I raised before.

When thinking about technology, we tend to focus on particular problems ‑‑ discrimination, misogyny, environmental degradation, interference in the information people receive ‑‑ but we have to think at a more abstract level, not only about the problems.

In that sense, it is important to think about what elements are relevant for us.

What elements are important?  Here there is perfect alignment with the different human rights instruments, like the Universal Declaration of Human Rights and also some conventions ‑‑ the Convention on the Rights of the Child, the convention against discrimination against women, and so on.  How do we reflect the principles that appear in these documents in digital technologies?  How do we map that?

In the work we proposed for the High-level Panel, we looked at applying the human rights instruments to the digital age.  How do we apply these instruments to make the technology and the domain more oriented to citizens, more oriented to humanity and the planet?

That's my short introduction.  Thank you so much.  The floor is yours.

>> ANSGAR KOENE: Thank you very much.

Thank you.  I think you have nicely sketched the breadth of the kinds of issues that come into play here, and a number of points we may come back to in our discussion, such as the role of multistakeholder fora and how best to involve the breadth of stakeholders needed to really deal with technologies of this type ‑‑ technologies that bring complexity from the technical side but, potentially more importantly, from the social side, in the very many different ways in which they impact society.  We need to bring together the various stakeholders in that space in order to create good frameworks that provide the safeties and guarantees people need.

Also, I think it is important how you referenced the human rights instruments and their role, and that this may be something to think about when we contemplate how to get to global-level agreements and a cross-border way of thinking about these issues.

Before we go down that road, I would like to introduce the final panelist from our team here, Ashley Casovan.  I hand over to you.

>> ASHLEY CASOVAN: Thank you so much.

Thank you.  It is a real pleasure to be here and participate in this conversation.

Thank you, Ansgar Koene, for your work in organizing such an important topic at such an important inflection point in history.

My name is Ashley Casovan, and I'm the Executive Director of the Responsible AI Institute.  Throughout my career I have been very interested in the intersection between policy, its application to practitioners, and the role that social technologies play.  From that perspective, up until recently I was a public servant: I worked in local government and then moved to the federal Canadian government, where my previous position was leading strategy and policy development for the Government of Canada on the government's use of data and of open source technologies, which finally led me down this path around artificial intelligence.  At the time it didn't seem coherent to talk about all three of those things together, but I'm now seeing similar actors and stakeholders in the space having important discussions that relate to a lot of the things that both Edson and John have already talked about.

From that perspective, my experience in thinking about the implications and potential issues related to the use, management and governance of data as it relates to digital technologies made me quite interested in some of those implications from, again, the public service perspective that Edson was speaking to.  What's really interesting about some of the examples he provided is that, independent of where we exist in the world and the different types of services that governments provide, we're all really experiencing some of the same challenges and questions: how these technologies can enable better service delivery and better access to important information for people, and also how we manage them in a way that is meaningful and mindful of people's human rights, privacy, security, et cetera.

So from that perspective, I became really interested after we developed in Canada the first national policy on government use of AI systems, ensuring appropriate oversight of those systems, called the Directive on Automated Decision-Making.  In it we set out nine requirements.  Many of you who are familiar with this space will recognize the concepts because, while we were I think quite early in establishing these types of policies, we really crowdsourced these ideas from subject matter experts all across the world in terms of thinking through the different implications these systems carry.  That includes things like ensuring there is appropriate governance and third-party review of the systems; making sure there is a human kept in the loop in the process while we're still understanding the implications; and ensuring there is notification that you as a person are actually using a system that has artificial intelligence components.  Then one we hear about quite a bit, really connected to protecting human rights, is related to bias: making sure that, as in some of the examples Edson provided, the systems are operating in a fair, safe way for people.

So from that perspective, I think it was a really interesting bar to set in terms of thinking through how we can put principles or concepts into an actual policy that would then have requirements and implications for compliance.

However ‑‑ and this led me to what I'm doing now ‑‑ when we were seeking to implement this in 2019 and spoke to the practitioners who were experimenting with different types of AI systems, they said: what standard do I follow?  What do I need to do to demonstrate compliance with these objectives you have put there?  We don't disagree with anything, but we really need to understand what good looks like, to put it very plainly.

We had spent so much time figuring out what the requirements should be, what the oversight should be, how to take all of these different principles that were out there ‑‑ international organizations, as John mentioned, were starting to think about this at the time ‑‑ and put them into a comprehensive policy, that we really didn't get to the guidelines or standards that were needed to help practitioners be successful with those objectives.  That question really sparked my interest in standards.  I come from a background in data architecture; prior to that work, as I mentioned, I worked at this intersection of policy, data and social technologies, which came about because I was involved in, or led, the development of Canada's open government portal.  It was a very practical application of collecting data, managing data, then trying to release it to the public in a way that's meaningful.  From that, I got to see, from both a builder and a user perspective, some of the needs for standards ‑‑ standards around licensing, for instance: it is one thing to release the data, but if you don't have a standard license, it is really difficult for the public, who you want to be the users of the data, to combine data from Canada with data from Brazil if we each have a different operating procedure.

So from that perspective, when this question was asked of me around standards, I thought: oh yes, when I was a practitioner, I also wanted to know that.  We want guidelines, guardrails to follow in order to actually comply with these concepts, which in many circumstances can be quite abstract.

One of the other things that was really interesting when implementing this directive is that the Government of Canada has 256 different lines of business.  You don't often think of governments as businesses, but they are really like large multinational corporations doing everything from transportation ‑‑ Canada has Canadian rail, and we were trying to think through how to avoid train derailments, so monitors were put on trains.

We were also thinking about making sure that people were getting access to benefits a lot more easily.  Then we also had things like the National Capital Commission, which wanted to alert the public with weather predictions for coming to visit the capital of Canada, Ottawa.  There is a beautiful canal there in the wintertime, for those who haven't been, and people like to skate on it; but depending on the weather ‑‑ and we're seeing this more and more with climate change ‑‑ the canal is not actually open that regularly.  People plan their winter vacations expecting something, and so there was a really lovely idea to help predict when the canal would actually be open.

That said, these are three examples of very different applications and lines of business, if you think of it from that perspective, and they carry different harms with them, some more significant than others.

While the weather prediction is very nice, in the worst case scenario it ruins someone's vacation; whereas we want to make sure there is accuracy related to train derailment, and fairness and accuracy related to the delivery of benefits, which is one of the main priorities or objectives of government.

From that perspective, having a catch-all directive for all of those different lines of service made it really, really difficult to put one standard around this.

It became clear to me ‑‑ and we're starting to see this now with the drafting of legislation in other countries ‑‑ that there is a need for context-specific regulations, but also for context-specific standards to support that effort.

That then led me to the work I'm doing now: I took a leave of absence from the federal government and joined what is now the Responsible AI Institute.  In that capacity, we're focused on building a certification program that is context-specific for different types of AI systems, recognizing that, as per my previous example, they all carry different types of opportunities but also different challenges, impacts or harms associated with those systems.

So from that perspective, I'm quite interested in how we can take and draw upon a lot of the work that I did in Canada and that my fellow panelists have been doing, and really convert those concepts into standards with really specific thresholds that say: okay, this is what good looks like for medical decision making systems; this is what acceptable practice for notification looks like when a conversational agent is being used, whether it is the government's 311 call centre or your local telecom company ‑‑ we should really have those same expectations and service levels.  That's really where I have seen that standards can play a very significant role.

One challenge with standards, though: as I mentioned, there were nine requirements ‑‑ I didn't even go through all of them ‑‑ and each of those contained different objectives, and each of those would rely on different types of standards.  What I found really challenging is that, instead of leaving a patchwork of different standards for everyone, we needed to put that patchwork together for practitioners.  Again, thinking about it from a practitioner's perspective: I want to do the right thing, but I also need some help in navigating what that right thing is.

I also need to balance time and cost and everything else when developing these systems, and so having a guessing game of what good notification looks like, or what a good bias test looks like, is not very fair.  That's when I got interested in private governance and certification programs that would be kind of the umbrella standard for all of the different standards: if we say these are the nine, or six, high-level objectives ‑‑ whatever they end up being ‑‑ that we're agreeing upon and trying to meet, here is the path forward for meeting those.
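
(A toy sketch of this "umbrella" idea, assuming hypothetical objective and standard names ‑‑ none correspond to the actual Canadian directive requirements or any real certification scheme.  The point is that a conformity program bundles the patchwork of standards under each high-level objective, so a practitioner can look up what applies in one place.)

```python
# Hypothetical umbrella scheme: each high-level objective maps to the
# specific standards a practitioner must satisfy for it.  All names here
# are illustrative placeholders, not real standards.
UMBRELLA_SCHEME = {
    "transparency/notification": ["STD-NOTIFY-1", "STD-EXPLAIN-2"],
    "bias and fairness":         ["STD-BIAS-TEST-3"],
    "human-in-the-loop":         ["STD-OVERSIGHT-4"],
}

def standards_for(objectives: list[str]) -> list[str]:
    """Collect every standard a system must meet for its applicable objectives."""
    return [std for obj in objectives for std in UMBRELLA_SCHEME.get(obj, [])]

# A hiring chatbot might trigger the notification and bias objectives:
print(standards_for(["transparency/notification", "bias and fairness"]))
# -> ['STD-NOTIFY-1', 'STD-EXPLAIN-2', 'STD-BIAS-TEST-3']
```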

So what we have done at the Responsible AI Institute is work to create a comprehensive certification program.  We started in the domains of employment, health and finance, because we saw that that's where there were opportunities in terms of increased use of AI systems in those domains, but also potential implications for people's health and livelihood, and so we wanted to think about those implications in a meaningful way.

Then we put forward a methodology to map all of these different challenges that exist.  I had the great opportunity to do something for the second time in a relatively short period, so I learned a lot from how we had built the Algorithmic Impact Assessment ‑‑ a very important companion to the Directive on Automated Decision-Making ‑‑ and thought about how I would do it better.  From that perspective, we have built what I call a certification program; really, it is a conformity assessment scheme, which is a set or suite of those different standards with context specificity for the domains I mentioned.

What was really important to me ‑‑ and my background in data standards was very helpful here ‑‑ was recognizing the importance of doing that through a formalized process.  As John and Edson mentioned, having government-approved standards and government-approved conformity assessments was important to me from the very beginning of this initiative.

So I have worked closely with the Standards Council of Canada to submit our scheme through this process.  Technology doesn't have the same boundaries that other types of tools do, we learned, so it is a harmonized review between Canada, the U.K. ‑‑ which also has a harmonized review process with the E.U. ‑‑ and the U.S.

Then we're looking to expand that to other regions.  Interestingly enough, within the ISO framework there is now a requirement, as of January of this past year, that all conformity assessment schemes have a harmonized review across the International Accreditation Forum.

So there is, I think, an overall sense that as our world has become more globalized, it is really important that our standards become globalized as well.

Obviously, different regions ‑‑ particularly when talking about socio-technical systems ‑‑ face different types of implications.  So different regions might adopt different subsets of these standards, and we're really trying to think that through with various types of pilots.

In addition to my role as Executive Director, I'm also Co-Chair of Canada's newly formed AI standards collaborative, an initiative that was announced in the 2021 budget.  It is a recognition that, in addition to supporting broader AI development in Canada, there need to be some guardrails put around these applications; in particular, what was interesting was the emphasis on the need to develop conformity assessment schemes.

So with that hat on, I am trying to help advance some pilots ‑‑ we haven't started those yet ‑‑ with other regions, which will allow some comparison between conformity assessment schemes: see what works in some regions and what doesn't work in others.

We're really seeking to understand how we can get to at least a common set of practices, so that, again, the consumer or user of these systems has both trust and assurance that those systems are doing what they say they're going to do, and that they're protected when using them.

As I said, we have started in just a few domains right now, and we're really trying to get our methodology down pat, developing it in a very systematic way: making sure we're mapping to the great work that's been done at UNESCO, the OECD, the World Economic Forum and other international organizations putting a lot of thought into this ‑‑ GPAI, where there is experimentation happening ‑‑ and then leveraging the existing standards that are either in place or in development at ISO, IEEE, et cetera.

From that perspective, once we completed that mapping, we established external Working Groups that provide us with good feedback from a subject matter point of view on what's acceptable, what's not acceptable, and how we can really make sure that those systems meet the ideas of safety, trustworthiness and fairness that we're seeking to achieve.

So from that, once this methodology is in place, tried and tested, we will be able to scale and expand to a lot more domains and a lot more regions.

I'll leave it there.

Thank you.

>> ANSGAR KOENE: Thank you very much, Ashley.

Again, you covered quite a broad range of the mechanisms that can help people answer the question: how do I know whether or not I can trust this system?  Certification is one of those you pointed to, but also government services as one type of application of AI ‑‑ one that obviously touches so many people, and where people don't have a choice, so these applications need a specific level of attention and thought behind them.

One element that was especially interesting ‑‑ I think others touched on this as well ‑‑ was the point you were making about context specificity, and the inherent tension there.  On the one hand, using the technology in different application contexts, but also in different cultural contexts, brings with it different kinds of requirements, different things you need to pay attention to.  At the same time, given how broadly these technologies reach, we need to be thinking across contexts, across countries and regions, to provide a coherent way of addressing this.  How do we deal with this inherent tension between the context specificity of the problems and the breadth across which these technologies are being used?

So in just a minute, after I do a short introduction around my background, we will open up to questions.

I do encourage the audience to already start thinking about that; if you have questions, do start putting them into the chat or raise your hand.

Myself, I'm the global AI ethics and regulatory leader at EY.  I sit within the global policy function; as such, my primary role is to engage with policymakers and to make sure that we can communicate in both directions: what are the particular concerns that policymakers have, and what are the particular concerns and ways in which industry is using these kinds of technologies.

Obviously, EY is a professional services firm engaged in assurance ‑‑ financial assurance, but increasingly also providing assurance and assessments for technologies ‑‑ and heavily involved in consulting, including technology consulting; specifically, we have a framework around trustworthy AI, and trust is at the heart of what EY does.  As such, we are engaged in understanding the mechanisms by which the trustworthiness of the organization, and of those using the technology ‑‑ as John pointed to, the people in the organizations running the systems ‑‑ can be established, and what procedures we need to put forward in order to do consistent assessments of that.

One thing to think about from the policymaking perspective ‑‑ and this is not just for policymakers, but also for industry and civil society ‑‑ is that if we don't think through sufficiently how to translate regulations into application, into successfully building toward compliance with them, then we will see failures of the regulations.

Those failures will then trigger people expressing discontent with the way the technologies are being used, and they will demand new pieces of legislation, new responses, which leads to reactive regulation creation ‑‑ which tends to be less carefully crafted than the more deliberative approach, where policymakers have more space to understand the problems from multiple dimensions before getting into it.

We need to be thinking about how we can provide the right kind of information through multistakeholder platforms and communication channels: what is the best way to make sure that all parties understand what the challenges are, and what ways of addressing those challenges will work and be implementable.

I will keep my introduction short in this way so that we have more time for the discussion.

Please, I invite everybody to raise their hand or to put questions into the chat.

While we're waiting for that, I wanted to put forward a first question for the panel.

As we look at the landscape of regulations developing in different countries and spaces, we also see attempts to create something that goes across individual countries, like the work Edson has been involved with at UNESCO.  To what extent do you see actual fragmentation, or are these different regulations going in the same kind of direction?  And what can be done to minimize divergence, so that we can address the challenges of the technology coherently across different domains and countries, and implement them successfully in organizations that work across borders, with tools that are used across borders?

Happy to open the floor for responses or other challenges.

>> JOHN HIGGINS: I'm happy to chip in to get the conversation going.

My sense, from having looked around briefly, is that there is a global ‑‑ maybe global is too strong ‑‑ there is a convergence.  When you look at the different countries approaching it in different ways, there is a convergence on the main things: transparency, a human in the loop.  If you look to the U.S. and ask law firms what guidance they can give you, or look to the U.K.'s body for data ethics and innovation, and compare the guidelines, they seem to me fairly similar.  I think there is a convergence going on at some level.  I would be very interested to know what the other panelists think about this.

>> EDSON PRESTES: Thank you for the question.

I also agree that there is some convergence.

For me, I'm not worried so much about the guidelines; guidelines are not enforceable, just recommendations.  But when thinking about regulation, I'm a bit concerned about the fragmentation that may exist across the globe.  That is in itself something to worry about.  If we have fragmentation, it can in fact hinder international trade; it can hinder the mitigation of global issues related to cybersecurity, for instance; and it can hinder the cooperation that creates common regulations.  Just to give an example: is what you consider a high-risk AI system under one classification similar in Canada, similar in Brazil?  If the classifications are similar, it is possible to map a high-risk or medium-risk application easily to Brazil.  If we have disharmonization in terms of concepts, we do not have a clear understanding, and we can also create problems for international trade and for the mitigation of potential issues.

That is what worries me about fragmentation.

>> ASHLEY CASOVAN: I will add to that.

I think the point that Edson is making is one of the stated purposes of the Canadian AI act: increasing trade, both internationally and nationally within the provinces, and also ensuring the protection of individuals.

From that perspective, there are obviously different cultural norms, and there is obviously going to be different oversight, even different implications, within nations.  I mentioned earlier that Canada has a rail system, but not a passenger rail system in the same way that many other nations do.  There are just different types of services and oversight requirements that exist.  So I recognize that there can't be a specific one-to-one mapping between all of these different regulatory objectives.

However, I think the intent can be really aligned, and this is why I think it's incredibly important to harmonize at the standards level: even though the regulatory frameworks may not be exactly the same, they can be close enough, or there can be aspects of them common across different jurisdictions, that allow for their interpretation and translation through common standards ‑‑ bringing the effort of government, civil society, academia and industry to build sensible standards that demonstrate compliance with those stated objectives, while still keeping in mind those inherent differences.  I say that, too, because we're talking about this at a national level ‑‑ obviously, given the setting of the UN, that's the focus ‑‑ but we are also seeing a lot of regulations coming out at state and local levels across the world.

Again, avoiding a patchwork of different types of compliance requirements becomes important.  Even in 2018, with the release of a lot of principles documents, we were all edging towards a very similar type of objective.  As always, the devil is in the details of how we do this.  I think that by having conversations like this, and having this group of experts working together on thinking through the implementation factors, we can get to those objectives of both protecting people and keeping international trade viable.

>> ANSGAR KOENE: Great.

What I'm hearing is that, on the one hand, at the high level there is quite a bit of general agreement on what should and should not be: if we look at things like the AI principles, pretty much all of the major sets of AI principles have cohered around the same kind of set.

When you get into the details of regulation, of course, there are different histories and different ways of lawmaking in different countries that lead to differences in specifics.

However, every piece of regulation deals with how the technology is being used and its impacts on citizens, and there are two parts to that.  One is how the technology actually functions; the other is what you are doing with it as a business, a private sector organization, or some other kind of organization.

So in a way, on the technology side of how the technology actually functions ‑‑ where you're primarily referring to things like the international standards being developed ‑‑ you may be able to largely refer to the same set of standards, because the difference between countries is less on how you build the technology and more on: is it okay?  Do we think in our particular culture that doing biometrics and face recognition for certain kinds of applications is acceptable, yes or no?  Some countries have different opinions on these kinds of things.  Then you say: okay, we have a standard around how things like face recognition technology should work, and separately the question of whether it is okay to use it ‑‑ that is the way you always have to be looking at it.

I see an interesting dynamic there between the role of the standards, which is more the body of technical experts creating them, and the legal side, which reflects the cultural norms of the different countries.

>> ASHLEY CASOVAN: Yeah.  If I could just add something to what you said, I think that's really, really interesting.

I feel like there is a lot of discussion, and maybe a misplacement, of the roles of standards and of regulation, and of the distinction between them.  I say that because when we're thinking about standards and putting some guardrails around appropriate use, it is not the determination of whether or not you should do it.  It is the assumption that this is being done and that, therefore, if you are going to do it, this is how it should function ‑‑ trying to keep a lid on some of these things.

I think the role that regulation can play is making a determination on whether or not there is acceptable use within a certain nation of things like facial recognition systems for policing, biometrics, et cetera.

That's a decision of that country to make.

So I think there is that inherent difference there, and we often get asked: are we going to advocate for a certain type of technology not to be used?  That's not the goal as I see standards ‑‑ maybe my fellow panelists disagree ‑‑ but we're really thinking: okay, if this is already on the market, how do we ensure the protection of people from that system?

>> JOHN HIGGINS: I entirely agree.

Further, to make a distinction: policymakers are supposed to use their fine judgment to decide what society wants, then encapsulate that into law, which the regulators then enforce with the appropriate mechanisms.  One point that is always worth remembering ‑‑ it took me a little while to realize this when I was in Brussels ‑‑ is that an awful lot of the focus of European regulation is harmonization across the internal market.

So when you talk to senior officials and commissioners about their main challenges, it is stopping the Member States from going off and doing their own thing and thereby fragmenting the market.  In a way, it is quite a good model to look at for the challenges that you get with harmonization.

Of course, Europe is a subset of countries with similar values ‑‑ similar, but not identical.  Think of the different values in, just to pick some, Hungary, France, Spain and the Nordics; there is a wide set of different societal values.

It is a good model to look at for how you harmonize policy: first of all, do we accept the use of facial recognition by police authorities or not?  In the European Union, there are mechanisms for driving those things and gradually getting to common positions.  I entirely agree, it is not the role of standards.

What standards can do is help policymakers understand whether something can be achieved or not.  The technical experts get together and help inform the policymaking process from an achievability point of view, among other things; it is not the role of standards bodies to set norms.  We have to leave the politicians with something to do.

>> EDSON PRESTES: That's quite interesting to hear, John.

That's very aligned with our proposal about AI governance mechanisms: they need to be multistakeholder, multilateral and multidisciplinary.  Governments often do not understand the technology ‑‑ what does the government understand about AI, about the data behind AI, about what AI even is?  So it is necessary to have the participation of those who work on the ground and of those who are impacted by the technology.  That's very important.  Otherwise, you have market fragmentation.

In terms of how to deal with the different regulations, this fosters the idea of a multilateral mechanism; otherwise, we will have the regulation developed in Canada, the regulation developed in Brazil, the U.K., and we will have to create compatibility between each of these regulations.

Imagine creating a compatibility document for each pair of states ‑‑ it makes no sense.  It is necessary to have a global organization coordinating this process.  The document created by the Ad Hoc Expert Group was negotiated by the experts and by the Member States of UNESCO.  It was challenging, taking two years to produce, but it happened.

They agreed on the vision, they agreed on the actions, but now they need to implement those actions.

I believe a multilateral organization like the UN is the best place to do that, because it has convening power: it can put the different stakeholders at the same table to discuss the technology, while leaving opportunity for regional and local government stakeholders to also express their views about the domain.  As mentioned, it is necessary to have a framework that's global, but you need to deal with specificity.  So in my view, we need to think toward proposals for global regulation and global policies at the global level that can still be situated at the local level.

In a similar way, we need mechanisms to observe the impact of technology at the local level, to identify how the technology is impacting different parts of the globe, and to try to create a generic impact assessment used across the globe.  This multilevel structure is necessary to avoid fragmentation, to foster cooperation, and to create something that will not block development anywhere in the world.  As I mentioned before, the digital domain does not impose borders; in fact, there are no limits for the stakeholders in the digital domain.

Thank you.

>> ANSGAR KOENE: An important element you're touching on is the need to have agreed-upon assessment frameworks.  Different countries may say: yes, we agree with the general direction of the guidelines, but in our particular context we feel that X or Y is acceptable while Z and Alpha are not.  Even so, if an application developed in this country or that country is assessed in a similar kind of way, and the outcomes of the assessment are reported in a similar kind of way, it becomes possible for people in every other country to look at it and understand whether it would be appropriate for them to use.  Even if one country says we think this is okay and another country says we don't, we have the same way of assessing it and of understanding whether this particular kind of technology is appropriate.

I was wondering if you ‑‑ or maybe also Ashley Casovan or John, through the kinds of work you're doing in this space ‑‑ are aware of activities happening towards developing these kinds of commonly agreed-upon assessment frameworks for the impacts, risks and compliance of AI systems or applications.

>> ASHLEY CASOVAN: Yeah.  There is an ISO standard ‑‑ go ahead, Edson.

>> EDSON PRESTES: Please.  Please.  Go ahead.

>> ASHLEY CASOVAN: There is an ISO standard forming; it will be guidance on best practices for developing them, recognizing that there are context and cultural distinctions to be made.  We're advocating for the appendices to include examples of the systems ‑‑ sorry, the assessments.

And then at the OECD there are efforts underway to do risk classification.

Similarly, at this point in time, it will be definitional, looking at what the concerns related to it are, but the intent, ideally, is then to come up with a tool for how that evaluation is done.

One thing I would say we have done some work on is really thinking about how these assessments are emerging.  What we did in Canada with the impact assessment, and what we're doing now, is more quantifiable, so you have a quantitative version of that impact assessment.  Then there is the qualitative version, which looks at it in the way we saw piloted with the national health system in the U.K.: having a common set of questions, but with those ultimately being free-text fields, where it is more about having the dialogue.  There are pros and cons to both of these.

I think one of the biggest challenges with the qualitative approach is that, while it is more thorough, you have some challenges related to its scalability.

So again, those are emerging trends, I would say, but there are not any standards yet in this space.

>> EDSON PRESTES: I do not have any work at the moment on impact assessment.  I have some work on developing standards, and the idea of the standards is exactly that: to enable interoperability among different systems.

In general, we propose some high-level concepts that can be instantiated locally.

For example, when you think about a robot as a concept, it is a generic concept that could be specialized into a system used for medical applications; we could also create a subclass for the agents used in the industrial domain.

One of the main benefits of having this structure is that all the information I collect from one application can be shared across, say, the medical domain.

In terms of a generic impact assessment, you could think in this direction: have a generic impact assessment, and then have different levels of impact assessment based on the domain, on the region, and so on.

So every time I need to see whether the impact assessment proposed in one region is compatible with the impact assessment proposed in another region, we just check the parent that gives rise to the different assessments.

Engineering can help with this; it is very technical, not political.  You can develop a technical structure to be used for policy purposes.  That's the proposal of the standard we created: it is focused on industry but could be used by government.
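
(A minimal sketch of this "check the shared parent" idea ‑‑ all class and field names are hypothetical, not from any published standard.  Regional impact assessments specialize a generic parent, so compatibility can be checked through the common ancestor instead of pairwise between every two regions.)

```python
# Hypothetical specialization of a generic impact assessment: regional
# variants inherit the parent's required fields and add local ones.
class GenericImpactAssessment:
    """Parent concept: the fields every regional assessment must cover."""
    required_fields = {"affected_groups", "transparency", "risk_level"}

class BrazilImpactAssessment(GenericImpactAssessment):
    # "lgpd_review" is an illustrative local addition, not a real field name.
    required_fields = GenericImpactAssessment.required_fields | {"lgpd_review"}

class CanadaImpactAssessment(GenericImpactAssessment):
    # "aia_score" is likewise an illustrative local addition.
    required_fields = GenericImpactAssessment.required_fields | {"aia_score"}

def compatible(a: type, b: type) -> bool:
    # Two regional assessments are interoperable if they specialize the same
    # generic parent, and therefore share its required fields; no pairwise
    # comparison document is needed.
    return issubclass(a, GenericImpactAssessment) and issubclass(b, GenericImpactAssessment)

print(compatible(BrazilImpactAssessment, CanadaImpactAssessment))  # True
```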

Thank you.

>> ANSGAR KOENE: Thank you very much.

Unfortunately, I see the time has flown by as we have been discussing.

I think we could go on for quite a while longer.  There are many points that I still wanted to pick up on, even just on this question of how to do assessments that are commonly understood.  I would also love to pick back up on John's point regarding individuals ‑‑ the certification of the individuals working in this space ‑‑ and how we do those kinds of assessments.

Unfortunately, the time has already passed.

I do hope everybody has benefited from the discussion that we have had today.  If you are interested in continuing to engage with us in this space, feel free to reach out.  I believe the IGF platform has a way to contact other people here.

I will also just put my contact details into the chat here for anybody who would like to reach out.

If others in the panel want to do that as well, please feel free to do so.

With that, I fear I need to close the panel.

I thank you all very much.  I hope to see you at next year's IGF.