IGF 2022 Day 4 Lightning Talk #2: A Global Framework for AI Transparency

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> EMMA GIBSON:  Hello, there.  I'm really delighted to introduce this Lightning Talk called A Global Framework for AI Transparency.

     I want to introduce Ivana Bartoletti, who is going to be speaking to us today.  She is Global Data Privacy Officer at Wipro, co‑founder of Women Leading in AI, and the author of An Artificial Revolution: On Power, Politics and AI.

     So, Ivana, if you can hear us okay, take it away. 

     >> IVANA BARTOLETTI:  Thank you so much, Emma.  I wish I could be there with you in person.  It is absolutely great to be talking about a topic which is, I believe, absolutely crucial in the time we live in.

     So, the reason why this talk is called an international framework for transparency is because of what is happening in the world right now around technology and the intersection between technology, the law, and human rights, in particular the right of individuals to understand and to control what happens to their information, whether it's their personal data or more sensitive and intimate data about themselves.  And I strongly believe that we are facing a short policy window, one that is opening now and won't be open for long, in which we are going to identify various ways to contain and steer the consequences, some of them undesirable, of this big tech age.

     I'm speaking as Women Leading in AI and Equality Now have launched a global campaign looking at how we upscale and update the frameworks that we have now around human rights, so that we can fully apply human rights in the digital age and feel safe and secure in the digital ecosystem we are navigating.  This is very important as the distinction between digital and non‑digital continues to blur, and it will blur even further with technologies like artificial intelligence, immersive technologies, and the metaverse.  There is a lot of talk at the moment around the metaverse, which would ultimately bring down the distinction between the digital life and the physical life we all live in.

     So, we've got to start by dispelling some myths, and I think it is really important that we do it in this context, which is the Internet Governance Forum: dispel some myths that have underpinned our understanding and policies around technology over the last decades.

     The first one is to understand that data is not a natural phenomenon.  It's not like mushrooms that we can find in the forest.  We do not find data.  We create it.  Data collection implies data creation.  And this is really important, because one of the ideas that we've been driven by everywhere around the world is that data is somehow a natural resource which is out there, a natural resource that we can tap into, because it's just available and we can just take it and use it.  The reality is very different.  Data requires a process of creation, and this process of creation is something that we as individuals have a choice on, and Governments and countries and authorities have a choice on, and I'll come to explain what that means in terms of transparency.

     The second is that data is not neutral.  Data mirrors society as it is, so inputting data into an algorithm to make decisions means that we perpetuate and crystallize society.  We're not evolving it.  Using data to inform policymaking may be misleading if, for example, we're lacking a critical approach.

     And it is dangerous, increasingly so, as algorithms edit the world that we see.  For example, when we browse the Internet, we each see different news or different items, because what we see is often the product of data clustering, of the tight profiling wrapped around our browsing history, what we look at, what we see.  This profiling of us is allowed by the digital architecture that we operate in, but it also means that algorithms have got this capacity to edit the reality that we see, and this raises a lot of issues, because the process of data clustering means that we group individuals together and we target those individuals based on specific characteristics.
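
     To make that mechanism concrete, here is a toy sketch in Python, with entirely synthetic data and invented interest categories; scikit-learn's KMeans stands in here for whatever proprietary clustering a real platform might use.  It groups users by browsing-history features and then hands each cluster its own content feed.

```python
# Toy illustration of clustering-based targeting (all data synthetic).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Each row is one user's browsing profile: fractions of pages visited in
# four invented categories [news, sports, parenting, finance].
profiles = np.vstack([
    rng.dirichlet([8, 1, 1, 1], size=50),  # users who mostly read news
    rng.dirichlet([1, 1, 8, 1], size=50),  # users who mostly read parenting pages
])

# Group users purely by behavioral patterns.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Hypothetical targeting rule: each cluster is served different content,
# so two people browsing the "same" web see different realities.
for c in range(2):
    centroid = profiles[clusters == c].mean(axis=0).round(2)
    print(f"cluster {c}: average profile {centroid} -> gets its own feed")
```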

     Algorithms also allocate.  Increasingly, around the world, we see how algorithms have the function of making decisions about whether individuals can receive something or not, and this is an allocation feature of the algorithms that we are creating.

     So, with algorithms editing, allocating, and shaping the world that we see, we've got to really understand this concept of the non‑neutrality of data, alongside the idea that it is extremely valuable.  It can bring enormous benefit to what we do, but we've got to have a critical approach to it, and this is absolutely important.

     The other thing that we need to value and understand is that much of the technology we've built so far has really been used to surveil, because data can tell a lot about us.  It can tell, for example, whether we're thinking about changing jobs, whether we're planning to have children, whether we might be thinking of divorcing, or whether we may be thinking of having an abortion.  What happened, for example, in the United States recently shows how location data can be instrumental in checking what women do and whether they cross into another state to receive an abortion.  So, data can tell a lot about us.

     And while this can have a positive impact, because obviously it can be used for planning and policy, at the same time it can also expose people to something which is not pleasant, especially the most vulnerable in society, including women.

     So, we need to really define what transparency is in the age of AI, Big Data, and algorithmic decision making.  The more people understand what it means to be transparent about the positives, the negatives, and the risks, and the more they understand the power of this technology, the more they can empower themselves to challenge this power.  That is really important.

     I believe that there are different ways that we can embrace transparency.  One is transparency as co‑determination: where we are creators of data and where we are not.  So, what is absolutely important is to be transparent and to make transparent decisions around when individuals become creators of data and when they do not.

     There is some data that is better not created.  There is some information that is better not stored.  And there are some experiences that are better left unrecorded.  What are these?  How are we going to define them?  Of course, they will change across cultures.

     For example, having lived previously in the ‑‑ for a very long time, I do understand that privacy is recognized almost everywhere, or at least acknowledged as something that grounds people in their autonomy and self‑determination; nevertheless, privacy is also interpreted, operationalized, and lived very differently across the globe.  This is important.

     What is also important, I believe, is to recognize that when it comes to transparency we ought to get together to understand, and that can happen in our communities, through civic participation, and that is where transparency comes in.  Which data are we going to create?  When are we going to become creators of it?  This decision, I think, is at the core of transparency, as is how we navigate individuals' decisions around when they become ‑‑

     >> EMMA GIBSON:  Ivana, your sound has gotten quieter. 

     >> IVANA BARTOLETTI:  Apologies, a drop in my voice.

     So, transparency No. 1 is transparency around where individuals become co‑creators of data.  How are we going to do this in a way which is participatory, in a way that involves the most vulnerable in our society, in a way where people can get together in different ways to really understand what value it brings, whether there is value in being generous about sharing and creating data, and whether there is value as a group, as a society, in this data generosity?  This is a concept that the European Union has put at the heart of the Data Governance Act, and I believe it is very, very important: the idea that privacy is a common good and that, therefore, there is some generosity around sharing data, because we can improve the lives of many.

     But the idea that we decide together where we are co‑creators is really important.

     The second principle around transparency, in my view, is the link between transparency and equality.

     When we talk about transparency, which is probably one of the most used words at the moment, when we talk about [audio cut out]. 

     >> EMMA GIBSON:  You have gone a bit quiet again, Ivana.  I don't know why.

     >> IVANA BARTOLETTI:  Is that better now? 

     >> EMMA GIBSON:  I think it was because you were getting a bit too far away from your laptop mic.  Yeah.  And your enthusiasm for this topic.

     >> IVANA BARTOLETTI:  So, when we talk about transparency and equality, that is really crucial in my view, because transparency is this topic that everyone is talking about.  When we talk about algorithmic transparency, we are somehow putting forward the idea that individuals should be able to understand what an algorithm is and how it operates.  We are somehow talking about transparency in a way that is not necessarily meaningful, partly because some people may lack an understanding of the technology and of how decisions can be made algorithmically, but also because a lot of technology and a lot of AI is deployed in a very opaque way, in a way that is very difficult for people to grasp and to understand; and if they don't understand it, they can't challenge it.

     Over the last few years, we've seen an amazing movement led in particular by women, and in particular women of color: women who have brought forward the topic of intersectional discrimination when it comes to algorithmic decision making, and women who have brought to the fore how artificial intelligence used in algorithmic decision making can end up automating existing inequalities and scaling them up in a way that is then very difficult for us to understand.

     Cases of algorithmic discrimination have been brought to public knowledge in a way that wasn't possible before.  And to be fair, countries across the world have taken action to showcase, to understand, and to show people that this algorithmic discrimination can happen and that we've got to address it.  However, the way we sometimes talk about transparency is really transparency about explaining and showing what the technology is capable of, and that transparency may not be meaningful to many people.  So the second link that we need to look at, the second pillar of transparency, has to be around equality.

     So, the question that we need to ask is:  Transparency for whom?  This is where that link between transparency and equality becomes important.

     So, every time a company is putting together AI or technology, every time a company is purchasing AI or technology, every time a Government is using a system, that question needs to be addressed: how are we going to be transparent to users, and how is that transparency going to be transparency that works for the most vulnerable in our society?  That link between transparency and equality is absolutely important, just as I've been advocating for a long time now that privacy is linked to equality, asking the question: privacy for whom?

     The other is transparency ‑‑

     >> EMMA GIBSON:  You're a bit further from your laptop again. 

     >> IVANA BARTOLETTI:  ‑‑ political leadership around trademarks and seals that can be displayed, agreed on, and used at a global level.

     This, again, I believe, is important.  So, what does it mean for these systems when they are deployed in our real world?  What does it mean to ensure that there are bodies that are able to audit these systems, that are able to issue trademarks, that are able to issue seals showing that a system has undergone an audit, and that they do that on a regular basis?

     I am fully aware that legislation is proliferating around algorithmic decision making and around AI.  The European AI Act is very much risk‑based and very much focuses on algorithmic decision making ‑‑ sorry, on AI that is high risk, and by high risk it means artificial intelligence that can have an impact on the fundamental rights of individuals as enshrined in the European Charter.

     Nevertheless, beyond the European AI Act, there is much other legislation being discussed around artificial intelligence.  I'm thinking about Brazil, about legislation like the Algorithmic Accountability Act in the US, and a lot of development around all this.  I'm thinking about the Council of Europe, and about the journey that is happening around the Global Digital Compact.

     So, what is really, really important is to encourage governmental institutions to validate systems and to push for a more global way of verifying them, so that individuals can develop trust, not in the technology itself ‑‑ I don't believe that individuals should have trust in the technology itself ‑‑ but in authorities that have been able to verify that these systems have been checked, have been audited, and not just once, but monitored on an ongoing basis against human rights, legislation, privacy, consumer law, and so on.

     Another key tenet of transparency, in my opinion, regards a better allocation of the burden of proof when it comes to algorithmic discrimination.  This is not a complex area, but I think it is very, very important, and it is something that I would very much like to see discussed in countries when it comes to potential discrimination arising from algorithmic systems and algorithmic decision making.

     So, I'm very well aware that many countries have discrimination laws that differ; they vary across the world, and they're very much based on characteristics that different countries identify as grounds of discrimination.  In some countries it could be gender, it could be membership of a trade union, it could be religious affiliation ‑‑ what is recognized in different systems as a ground of potential discrimination.  However, when it comes to algorithmic decision making, it's very difficult to prove that discrimination has happened, because discrimination in AI happens in a way which is very different from traditional discrimination.

     So, for example, an individual can be discriminated against because they visit a particular website, when that website acts as a proxy for something else.  It may reveal something else about the person.

     Algorithms and Big Data analytics are able to cluster individuals into certain groups according to patterns that we, people, cannot even see, and to discriminate on the basis of those patterns.  These patterns may be a proxy for some other discrimination ground.
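
     Here is a minimal sketch of that proxy effect, again with synthetic data and invented feature names: the model is never shown the protected attribute, yet a seemingly innocuous feature (visiting a particular website) correlates with it and quietly reproduces the historical bias.

```python
# Proxy discrimination in miniature (all data synthetic and hypothetical).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                   # protected attribute
visits_site = (rng.random(n) < 0.2 + 0.6 * group).astype(int)   # correlated proxy
income = rng.normal(50, 10, n)

# Historical decisions encode a bias against group 1.
label = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

# The model is trained WITHOUT the protected attribute...
X = np.column_stack([visits_site, income])
model = LogisticRegression(max_iter=1000).fit(X, label)

# ...but the outcome gap persists, carried by the proxy feature.
pred = model.predict(X)
print("approval rate, group 0:", round(pred[group == 0].mean(), 3))
print("approval rate, group 1:", round(pred[group == 1].mean(), 3))
```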

     What we must avoid, in my view, is leaving individuals having to fight alone for the right and for the proof that they've been discriminated against.  What we need to ensure is that there is recognition of what we were talking about: the fact that algorithms may not be transparent, and may especially not be transparent for the most vulnerable in society.  I do believe that policymakers need to think, especially for less transparent algorithms, especially for less transparent, high‑risk AI, about how we revisit the burden of proof in the digital age and how we rebalance it a bit, so that individuals, especially in certain cases, are not left to their own ability to challenge a decision made by an algorithm.  So, if there is an issue, then in some cases in particular it ought to be the responsibility of whoever produced that system to demonstrate that the system is fair.

     So, the rebalancing of the burden of proof is, in my view, one of the key aspects and key tenets of a more transparent approach to the deployment of artificial intelligence.

     The last element for me is important: transparency with regard to human rights.  I strongly believe that artificial intelligence holds huge opportunity for the world, and the fact that Women Leading in AI, Equality Now, and many other organizations talk about this, the fact that we are so passionate about bringing in the right guardrails and the right controls, is not because we fear this technology or because we don't love it.  It is because we like it and love it so much that we want it to work well, to be a catalyst, a driver for bettering our society, improving the way we do things, improving well‑being, improving growth, reducing pollution.  But in order to do that, we need to make sure that the right guardrails are in place.

     International cooperation matters, especially at the time we live in, which is a time of division, a time of conflict, and also a time where data and AI are very much a political issue.  So, what is really important is that Governments and institutions, when deploying these systems, when creating these systems, are able to perform an impact assessment of what that system means in relation to the world.  How is it going to change the world?  How is it going to make it better?  This is why processes like the human rights impact assessments that the Council of Europe is bringing forward, for example, and that Governments are bringing forward ‑‑ Canada, for example, is bringing forward algorithmic impact assessments for systems used by the public sector ‑‑ are very important.  They focus on how a particular system impacts human rights, but that may not be enough.  AI also impacts society on a much greater scale, and there are ethical considerations that need to be made.

     So, to me, the last tenet of transparency is around how organizations, countries, and Governments, particularly in certain areas, are going to mandate and make available impact assessments that bring together human rights, a strong focus on gender and intersectionalities, but also impact on the environment and on society in the longer term, and how they are going to make these available to individuals and citizens.

     So, I believe that these are the main elements that we need to bring together when it comes to artificial intelligence and technology, in particular when it comes to systems like algorithmic decision making that can have a big impact on individuals.

     Transparency is very much a word that is used as a lifesaver: we say the system is transparent and assume people will automatically understand it.  But I very much believe in forums like this, in national institutions and places where we can define together what it means to be transparent ‑‑ transparency which is inevitably linked to equality, inevitably linked to those who are going to be most impacted by these technologies themselves.

     So, we as Women Leading in AI and Equality Now will be championing and researching and working on these issues moving forward.  We have a window, I believe, for the next few years where we can really get a grip on all these technologies and define transparency moving forward.  And I believe that if we do not define what transparency is now and have the right policies in place to deal with it, then it may be too late, and we'll miss an opportunity for people to trust innovation, to trust the ‑‑ and to really create and develop and implement technologies that will benefit the world as a whole.

     >> EMMA GIBSON:  Ivana, thank you so much.  That was really interesting.

     I think we have about one minute or so for potential questions.  Are there any questions from anyone in the room?  I think you have already answered the question from somebody in the room who was going to ask about how we challenge algorithmic discrimination decisions.

     Is there anybody ‑‑ are there any questions in the chat?  I'm just asking our tech guy.  No questions in the chat.

     Ivana, while we have just got one more minute left, I wonder if you can talk a little bit more about the particular challenges that you see with the metaverse, which is something that is being talked about at this conference.  Interestingly, there are people here, including myself, who are really concerned about human rights in the metaverse and how they are going to be applied, and there are people here, especially from Developing Countries, who say: we just want to get a foot in the metaverse, nobody is interested in bringing the metaverse to my country, and how are we going to get the economic benefits from the metaverse?  So, I'm hearing very different kinds of views at this conference about the potential of the metaverse for good things and not so good things.

     I'm just wondering in the last minute if you can just say a few words about that.

     >> IVANA BARTOLETTI:  The metaverse can bring a lot of opportunities, and it's important that we recognize this.  Every time that we talk about technology, especially about the metaverse, we've got to avoid thinking that, because we live in a particular part of the world and have our own history, we have the only valid approach.  In reality, the metaverse holds a lot of opportunities.  Think about ‑‑ for example.  I'm also thinking about the offices of the future, and about people being able to do things and try things that maybe, when they're in more remote areas, they wouldn't otherwise be able to do.

     I mean, we've got to look at this with an open mind, without thinking that because we live in a certain part of the world and experience things in a certain way, we simply don't want it.  We need to be able to really understand that there are opportunities within this, so long as we do things right.  The metaverse does bring challenges, challenges around which legislation applies.  As a woman, can I be raped in the metaverse, and if so, who is going to protect me?  Am I going to be bombarded with digital advertising, because so much information about me is going to be available?  An avatar, ultimately, is me; it is me in that world.  But also, how are we going to create a metaverse which is not, once again, owned only by big tech, as before?  Do we need to rely on the big platforms to do this, or can it be a place where we experience a more decentralized web, as we want to?

     I'm working very much at the moment on all these challenges, trying to look at the design of metaverse products and which platforms are going to be used, and we will be working on this as part of our campaign, Emma, for Women Leading in AI and Equality Now, and I encourage anyone to get in touch.  This is very much a field where we need to work together.  The metaverse is not one technology; it is the coming together of immersive tech, blockchain, crypto, and artificial intelligence.  There is so much that comes together in this.  At the moment there is not much thinking about regulation, not much thinking about how we govern it, but again, this is where we've got a policy window.  We may not need legislation; we may just need to understand how all these technologies interlock in a virtual world.  But this is where the debate has to happen now, and in the sort of campaign that we will be running over the next few years, it would be really wonderful, really good, to bring in perspectives from all parts of the world.

     >> EMMA GIBSON:  Ivana, thank you so much.  We're out of time now, and I really appreciate it.  I know that you're in a very different time zone to us, and I really appreciate you making time in your busy schedule to come and talk to us about this topic.  The session has been recorded, so people who aren't here today can still watch it online.

     So, thank you very much.

     >> IVANA BARTOLETTI:  Thank you.