IGF 2022 Day 4 Town Hall #37 Beyond the opacity excuse: AI transparency and communities

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> MARIEL SOUSA: Hello, good afternoon. For those online and in person: we are taking a few more minutes to prepare everything and let more people join us online. Thank you.


>> MARIEL SOUSA: Is this working?  Hello?

>> Yeah, you just press it.

>> MARIEL SOUSA: Hi. Welcome, everyone. We are still waiting on one panelist; he might join us later on. But I think we are just going to start our panel for time's sake, since we only have one hour.

Welcome to our panel "Beyond the opacity excuse:  AI transparency and communities."  My name is Mariel Sousa.  I'm a Policy Advisor at iRights Lab, a think tank for the digital world.  We are located in Berlin, Germany, and I'm here with my colleague, Jose.

>> JOSE RENATO: Hello, everyone. Hello. Can you hear me? Perfect.

Yeah. Really nice to meet you. My name is Jose Renato. I'm the founder of LAPIN, a Brazilian think tank based in Brasília, the capital of Brazil. And, yes, we are doing this panel together, and I also conduct research at the iRights Lab. It is a pleasure to be here with you.

This conversation we are going to have -- and we really expect it to be a conversation indeed, not something super formal and tight in a strict format, but one where you engage with us and share your views -- came after our perception that every time we talked about AI transparency, (?) transparency, (?) transparency, we ended up being too superficial. And we realized that maybe this is the time for us to think of things that are more concrete, especially to effectively help communities better understand the systems they are engaging with and that affect them to a large extent, and also to empower them to question those systems' outcomes and really have more strength when dealing with them.

Some of the questions we aim to address are: what is the importance of transparency, to what degree should it be the solution or part of the solution, or should we think differently?

And in this sense: how should communities be informed, what information is necessary for that, and does the answer depend on the technology, on the kind of system, or not? So, maybe these are questions and issues that we might leave the room with -- not exactly with concrete answers, but at least with some more information and a more mature debate in this area.

And I think Mariel can talk a little bit about how the panel will be structured, what the methodology will be, and so on.

>> MODERATOR: Yeah, we have one hour to discuss all these questions, and we are very happy, Abeba, that you are joining us in person. You are from Ethiopia and based in England and Ireland.

We have with us two online panelists at the moment: Yen-Chia Hsu, from Taiwan and the Netherlands. Hi, nice that you're here with us.

Nina da Hora from Brazil. 

And we are still waiting on Prateek Sibal from India and France, who I hope will join us later on in the session.

All of our panelists will now give us insights into their work and into the specific communities they work with that are affected by AI transparency. We want this panel to be designed in an open way, and we invite everyone to share their perspectives and comments. So, please feel free to raise your hand and ask your questions. And panelists, feel free to comment on each other's points and engage in discussion with one another. We are really interested in what you have to say.

So, let me start with introducing our panelists in more detail. Abeba Birhane, you are a cognitive scientist; you do interdisciplinary research on human behavior, social systems, and responsible and ethical artificial intelligence. Your interdisciplinary research explores broad themes across embodied cognitive science, machine learning, complexity science and decoloniality, and your work includes audits of computational models and large-scale datasets. You are a Senior Fellow in Trustworthy AI at the Mozilla Foundation and an Adjunct Assistant Professor at the School of Computer Science, University College Dublin.

Also with us today is Yen-Chia Hsu, Assistant Professor in the MultiX group at the Informatics Institute, University of Amsterdam. You focus on studying how technology can support citizen participation, public engagement, citizen science and community empowerment.

And during your Ph.D. in robotics, you conducted research on how technology can empower local communities tackling the issue of air pollution.

>> JOSE RENATO: I would like to start with Prateek Sibal, who is unfortunately not here yet, but we expect him to join soon. He is a Programme Specialist in the Digital Information and Transformation Section of the Communication and Information Sector at UNESCO, where he coordinates the sector's work on artificial intelligence and digital transformation. His work spans research, advocacy, policy advice and capacity building for the governance of digital technologies.

And now, with particular pleasure on my part, I introduce another Brazilian, Nina da Hora, with whom I have already been very much in touch discussing policy in Brazil. She is a 27-year-old "scientist in the making", as she identifies herself, and an anti-racist hacker.

She holds a BA in computer science from (?) and researches justice and ethics in AI. She writes for MIT Technology Review Brazil, is part of the safety advisory board of TikTok Brazil, and served on the transparency commission for the 2022 Brazilian elections created by the Superior Electoral Court. That was a huge job for you, Nina, because the elections were quite challenging.

She has recently joined the Thoughtworks team as a domain specialist. Welcome to all of you. We are glad to have you here. Thank you for accepting this invitation.

Mariel, it's up to you to start the panel discussion with the first question.

>> MARIEL SOUSA: As we said when we talked before this panel started, we wanted to go into a bit more detail today and discuss concrete technologies and practical takeaways, and for this we prepared some questions. We will start with some general questions around your work: what type of AI you work with, what type of transparency, and which communities you work with. Later, we will get into more detail on policy and on the practical implementations we can derive from the discussion.

So, maybe, Abeba, you want to start? Please describe what you are working on regarding AI, how AI is part of your work, and which communities you are working with. Maybe give us some insights.

>> ABEBA BIRHANE: Hi, everyone.  Thanks for being here.  Thanks for coming along, even though this is the very last session.  Thanks for everyone watching this online as well.

So, I do a lot of work; I have various projects. I guess the one of my projects that is closest to AI is auditing. Auditing is something new, something that has been emerging, especially over the last five years. It's a process of vetting algorithmic systems at large scale, assessing them, asking questions such as: are they representative, are they accurate, are they functional, and similar questions. That general process is called auditing.

So, as one of my projects, I do audits of both algorithmic systems and large-scale datasets.

As to the communities I work with, it's difficult to identify one community, because I don't really work with any single community per se. But a lot of my work comes from the desire to bring about justice for various groups, whether within Europe, in Africa, in Ethiopia or elsewhere. It comes from the desire to audit systems and to ask questions from the perspective of the communities that are most impacted by these systems. So it's not one community per se, but I will say that I aim to adopt the perspective of people at the margins. And, yeah, I guess I will leave it at that for now.

>> MARIEL SOUSA: Thank you.

Yen-Chia, would you like to continue?

>> YEN-CHIA HSU: Yeah.  Hello.  Can you hear my voice?  Is that okay?  All right.  Cool.

I am Yen-Chia. My background is a combination of design and computer science. I actually did my bachelor's in architectural design and then became interested in computer science, so I went to Carnegie Mellon University, where I studied interactive computing. Later I got interested in how robotics technologies interact with people. During that time I joined my lab, the CREATE Lab at Carnegie Mellon, which has had close connections with local communities for a long time, and I happened to work in that area: using technologies to empower people, to provide evidence of air pollution.

I actually met my supervisor in one of the courses, called human-robot interaction, and I was interested in the idea of building systems and technologies around social issues, because that's also quite related to my background in architectural design.

So, I began to work with communities around 2014. I went to community meetings to understand their requirements, their needs in combating air pollution. The city is Pittsburgh, and the context is that Pittsburgh has been suffering from air pollution for a long time. This community specifically is about 80,000 people. There was a coke plant constantly producing industrial emissions that affected this community. The community wanted to collect evidence to advocate for improving air quality, and hopefully to lead to attitude changes among regulators and to policy changes. That's what they wanted.

So, we worked together with them, setting up cameras and sensors, and then we set up a network and infrastructure there. Then we worked with them to co-design and build a system that allows and enables people to tell stories of air pollution.

And eventually they presented it at a town hall meeting with their local regulators and local government, to actually change their attitudes.

So, this is one example of my previous work. In general, I work on using technology to solve problems and empower people.

>> MARIEL SOUSA: Thank you very much.  And our last panelist is Nina.  I can't see you at the moment but maybe I can hear you?

>> NINA DA HORA: Yeah. Hi, everyone. Good morning or good afternoon there, I don't know; here in Brazil it's good morning. It's a pleasure for me to share some insights with you in this panel. And first, I need to say it's a pleasure to be here with Abeba, because she is one of my references in this area, in this debate. When I started to study AI and the algorithmic colonization debate, Abeba was one of the first people whose articles I read and whose insights I followed. So, thanks for inviting me to this panel.

I am a computer scientist, so my background is algorithms and AI. I started in the AI area by criticizing a project I was participating in, in 2018. I was at (?), a developer on a team working on voice recognition and image processing. So I started with computer vision and other AI tools. But in my first project at the startup, I saw many problems with voice recognition and with facial analysis of Black people. So I reported these problems to the startup, and people had no idea how a solution would be possible.

And for me it was very complicated, because my colleagues didn't recognize these problems with the same mindset. I don't know how to explain this, but the majority of the people on my team were white men. And when I reported these problems, I didn't get an effective solution, despite what these systems cause in Brazilian society. So I decided to leave the startup and started to research (?) AI and the problems in computer vision. My latest research was about how facial recognition has impacted Brazil (Rio de Janeiro, Bahia and some other states here) and how facial recognition is totally negative for the Black community.

So, I am using my computer science background and my background in Black movements here in Brazil to start some initiatives to discuss with civil society and government how to mitigate this problem here and, if possible, in the world.

So, the context in Brazil is very complicated now; we are under a problematic government. So, like Jose said, I am trying to help construct, or help reconstruct, a democratic society in Brazil.

So, I am using AI and data science and many other backgrounds in social science to connect civil society more to this discussion here. So, thank you.

>> JOSE RENATO: Thank you very much. Let's move forward now into the role of transparency in the work you are all doing, which is of course very different from one person to the next. Maybe I would like to start with Yen-Chia -- and please tell me if I'm mispronouncing your name, my friend.

So, yeah, I would like to hear from you: within your project, you said you worked closely with communities on solutions to air pollution, and I wonder whether transparency was part of this process as a whole. Did you see it as an effective tool against eventual pitfalls that could come from this system, or from other systems that you may have worked with, heard about, or studied?

And if so, what was the role of transparency? How did you design it, or how do you think it should be designed and put into practice? Thank you.

>> YEN-CHIA HSU: My name pronunciation is correct.  Thanks.

So --

>> JOSE RENATO: Great.

>> YEN-CHIA HSU: Yeah, the AI transparency part. Actually, when I was developing the tools with people -- well, maybe I should explain the AI part first. What I was doing was actually using automatic computer vision algorithms to help communities spot industrial pollution from the cameras, because we have cameras monitoring facilities that emit emissions. These look like black plumes or brown plumes, or sometimes bluish plumes.

And then we actually used the algorithm. But one challenge we faced is that we needed to let people use it in some way, so that they could create video clips and present them to the government and to the public.

So, the approach we used is that we designed the entire system with local people; we put everyone in the loop. At the table we had the core community people. They have monthly meetings, and each meeting has around 20 to 30 people, and we regularly went to those meetings and talked to them to design the system.

And we told them there is actually a way we can automate this process, because they used to look at the videos and take screenshots by themselves, and we just told them: oh, actually, now there are ways we can automate something like this.

But the underlying model, I just told them, is some kind of magic thing that runs in the background and does something similar to what humans do in finding all the smoke, based on the colors and based on how it moves. Basically, that's how I described it to citizens, to people who don't really have machine learning knowledge.

And then I showed them that with this tool, it can produce a bunch of video clips they can make use of. But it is not perfect, because it is going to make some mistakes.

So, we have some user interfaces where people can manually look at all the output from the AI model and pick the ones they want. They can later take those and present them in town hall meetings or send them in emails to other people.

So, in terms of transparency, I think transparency is different in every context and in every application. In my specific context and application, I didn't intentionally push it a lot; instead, I tried to keep people in the loop so that they know what's happening: what the input of this AI model is (the monitoring camera videos) and what the output of the model is (a bunch of clips that people can use to do something).
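[Editor's note: the human-in-the-loop workflow Yen-Chia describes (a model surfaces candidate smoke clips; residents review them and keep only the ones they endorse) can be sketched roughly as follows. This is an illustrative sketch only; all names, scores and thresholds are invented and are not details of the actual Pittsburgh system.]

```python
# Illustrative sketch of a human-in-the-loop clip review pipeline.
# The model never decides on its own: it only proposes candidates,
# and a human reviewer makes the final selection.

from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    smoke_score: float     # model confidence that the clip shows smoke
    approved: bool = False  # set only by a human reviewer, never by the model

def propose_clips(clips, threshold=0.7):
    """Model side: surface only high-confidence candidates, best first."""
    return sorted(
        (c for c in clips if c.smoke_score >= threshold),
        key=lambda c: c.smoke_score,
        reverse=True,
    )

def human_review(candidates, approved_ids):
    """Human side: reviewers keep only the clips they want to present."""
    selected = []
    for clip in candidates:
        if clip.clip_id in approved_ids:
            clip.approved = True
            selected.append(clip)
    return selected

clips = [
    Clip("cam1-0800", 0.91),
    Clip("cam1-0815", 0.42),   # below threshold: never shown to reviewers
    Clip("cam2-0830", 0.77),
]
candidates = propose_clips(clips)
evidence = human_review(candidates, approved_ids={"cam1-0800"})
print([c.clip_id for c in evidence])  # -> ['cam1-0800']
```

The design choice matches what the speaker describes: the model reduces the volume of footage to inspect, while the community retains control over what becomes public evidence.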

So, to me: I explain it myself, but the AI model by itself is not really transparent. That is actually a hard question, how to design it. I know some applications probably need the AI to be very transparent. But usually, in my case, the communities see the AI model as a tool that can create and generate some kind of outcome that helps them achieve something or prove their internal hypothesis.

So in this case, I tried to make the entire system useful to them, rather than focusing on making the AI model itself understandable to people. Yeah.

>> JOSE RENATO: Okay.  Thank you very much, Yen-Chia.

Maybe now, Nina, you would like to share some thoughts. I know that you have worked a lot with face recognition in Brazil, which has so many issues, as it does everywhere in the world, but you are also starting other projects.

What do you think would be the ideal form of transparency? To what degree is it helpful? What are your thoughts on that?

>> NINA DA HORA: Okay. Specifically for face recognition, for me it's difficult to think about, because I really think these technologies are very problematic to introduce into society. Here in Brazil, we have many problems with surveillance from the government and from the police; I think "police" in English is similar to the word in Portuguese.

But for the other technologies in AI, I think the first thing is how to explain transparency to civil society. Because we have many ideas about transparency. We participate, we are very, very intimate with this discussion about transparency in AI, transparency in data. But civil society doesn't understand these ideas of transparency. For me, it is important to explain more about transparency, and to explain that it has many levels: transparency in AI models, transparency for data privacy, transparency in the public algorithms from companies.

And the second thing is about how companies understand transparency. Because transparency is not only about APIs or technical documents or only about algorithms and how to explain the algorithms.

I think companies need to be responsible, and the regulation I am participating in at this moment in Brazil has some ideas for bringing companies into these regulations. So AI regulation, for example, is not regulation of the use of AI in itself; the regulation is about how companies use AI to serve civil society here.

I think we need to divide the levels of transparency more, because there is not only one idea of transparency. There are many other ideas, and there is a civil society that needs to participate in this debate.

So, I have a hacker mind. I don't believe in regulation that comes only from government. I think civil society needs to participate more. And by civil society I don't mean more researchers or more computer scientists. It's about my mom. It's about my grandmother. It's about the (?), the Black movements, the Indigenous movements; they need to participate more in this debate.

Because these people understand how transparency can be a problem if it doesn't have the correct (?)

>> JOSE RENATO: Thanks, Nina. A bunch of questions came to my head as you were talking; I really hope I am sharing that with other people here.

Well, Abeba, you mentioned you were thinking of many communities at the same time, right? But your work also relates a lot to decolonial AI and so on, so forth.

And I wonder if you have thoughts on whether transparency would be different for the general population of a country in the so-called Global North versus in a post-colonial country, somewhere from the majority world, as we are also calling it. Would you have thoughts on that?

>> ABEBA BIRHANE: Yeah. So, I will just add to Yen-Chia's and Nina's points, and I will complicate the understanding of transparency a little bit rather than clearing things up. Because, unfortunately, it is a bit messy.

So, on the one hand, transparency is good; it's something we should all aspire to. But as Yen-Chia specifically mentioned, transparency is not always possible, especially when you are working with machine learning models, because even the engineers who design the systems just don't know why certain data are clustered, why certain decisions are made, why algorithmic systems give us certain outputs. We don't know, because these systems are black boxes by nature. Not all of them, but some of them.

And there is a great movement for transparency: transparent AI, explainable AI, open sourcing. These are all related, and to some extent these are all good movements. And to put it in context -- for example, I work with datasets, so let me explain this from the side of datasets. I work with image datasets. Take a look at some of the major, large-scale datasets. By the way, large-scale datasets are the backbone of AI systems. We have had various AI methods and techniques since the 1960s and '80s; it's only the emergence of large-scale datasets over the last 10, 15 years that has made AI so fashionable, because over the last 15 years we have had the internet, where we can gather huge volumes of data. So, datasets are really critical for AI systems.

So, if we think of transparency and explainability and open sourcing in terms of large-scale datasets: on the one hand, we can audit them, we can scrutinize them, we can vet them, if they are open sourced and if there is some kind of transparency around them.
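[Editor's note: as a concrete illustration of the kind of question an audit can ask once a dataset is open, one can check group representation and per-group accuracy. This is a minimal, hypothetical sketch; the records, labels and group names are invented for illustration and do not come from any audit discussed here.]

```python
# Two simple audit questions over an open dataset:
#  1. Is each group represented, and in what proportion?
#  2. How does accuracy break down per group?

from collections import Counter

def representation(records, group_key="group"):
    """Share of each group in the dataset."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def per_group_accuracy(records):
    """Accuracy of stored predictions, broken down by group."""
    correct, seen = Counter(), Counter()
    for r in records:
        seen[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / seen[g] for g in seen}

records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
]
print(representation(records))      # -> {'A': 0.75, 'B': 0.25}
print(per_group_accuracy(records))  # -> {'A': 1.0, 'B': 0.0}
```

Even this toy example surfaces the pattern the panelists keep returning to: a group can be both underrepresented and poorly served, and neither fact is visible unless the dataset is open to scrutiny.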

But if you look at some of the major datasets used for state-of-the-art vision systems and language models -- take the datasets used by Google for gigantic, large-scale models, or the datasets used, for example, by OpenAI -- despite the name OpenAI, the datasets, and even the algorithms and the weights and so on, are not open sourced.

So, on the one hand, these systems really impact huge populations, large numbers of people. But because they are not open sourced, and because there is no transparency by design, it's difficult for external auditors to scrutinize them.

Coming from that point of view, transparency is good and open sourcing is good. But, again, to muddy the water and to get back to your second question about impacted communities: even if we can make algorithmic systems transparent, even if we can know the kinds of steps taken to make certain decisions, even if we can make AI systems explainable, that does not necessarily lead to just systems. Explainability and transparency and open sourcing do not necessarily entail that you are pushing for just systems. And I work in the audit space; you can read so many academic papers on explainability and transparency where people just say: here, we looked at this system, we assessed this system, here are the results. That's it.

There is no accountability, there is no next step, there is no asking how this impacts things. What does this mean for algorithmic systems? What does this mean for decision making? What does this mean for impacted communities?

Yeah, I'm muddying the water and trying to show both the positives and the limitations of transparency and explainability and open sourcing.

>> MARIEL SOUSA: Thank you so much, Abeba. Very interesting. Actually, our third question was about challenges, and we are already getting into that a little bit. But before we dig into it deeper, we wanted to open the panel and ask you: do you have any questions? Any experiences you would like to share? Any comments on the input that was given so far?


>> AUDIENCE: I have a question. I don't work in the area of AI, and I feel Nina presented a tangible problem, namely that face recognition is optimized towards white faces and doesn't work for people who are not white. And I was wondering: how exactly does transparency contribute to solving that problem, and which other conditions need to be in place? What do we do if we have biased datasets? If someone could explain how each step would look, from having a flawed system that leads to a bad outcome, and how transparency improves on that?

>> ABEBA BIRHANE: Maybe I will let Nina take this on, because she talked about transparency.  I mean, facial recognition systems.  Sorry.

>> JOSE RENATO: Let me add a little bit of spice to this question. In case you think that transparency is not the answer, or at least not enough, and you already have thoughts about what could be added on top of that, please feel free to share those as well. Thank you. Just a little bit of spice and confusion for the question (chuckles).

>> NINA DA HORA: Okay. Yeah. If I understood the question: I think transparency in facial recognition is different. Here in Brazil it happens in two steps. The first step is what I said about not recognizing Black faces, and the second step is that when it does recognize Black faces, it associates them with violence. Some friends, some Black friends, have had this problem. So we have these two steps in this use.

So, I think that transparency with images of other people includes aspects such as how the images are collected, how they are processed, and how they are used.

So, I think transparency needs to be divided into these three steps -- the collection, the processing, and the use of these images -- before we think about the technology.

For me, face recognition at this moment is, I think, totally problematic. So, sorry, I don't have positive insights about this technology and this use.

As for transparency in other technologies that use images, like apps, like the security of your iPhone or your smartphone that uses face recognition or facial analysis:

I think the first step is to include it in the privacy terms. Sorry, Jose, I don't believe in privacy terms, because they are difficult to read, but they are what we have now. And the privacy terms we have now don't explain how your images are used or how these companies will share these images. I think this step is important for transparency.

And the other step, for technologies that use images, visuals and other data, is about explainability. I need more time to elaborate this argument, so I will stop here, because we need to keep the panel moving.

But at the end, I will explain more about the second step.

>> JOSE RENATO: Thanks, Nina. And just to make it clear, I don't trust terms of privacy either. There's always one story after another showing that we shouldn't trust them at all. And I also don't trust face recognition, definitely not.

Mariel, please.

>> MARIEL SOUSA: Any other questions from the audience?  You're more than welcome to participate with any comments, experiences, questions that you have.

Otherwise, I would be really interested in how, in your research project in Pittsburgh, you made the AI systems explainable. How did you interact with the community? What feedback did you get? Maybe you can share a little bit more about this.

>> YEN-CHIA HSU: Hearing Nina (sorry if I don't pronounce your name correctly) and Abeba give their definitions and reflections about transparency, I was thinking about what I did in that regard. And I think the hardest challenge I have felt is that a lot of transparency practice that exists on paper today is for scientific work, and it's very hard to push transparency to the citizens' side.

For example, I was trying to explain to local people what these models are doing. But I can't really explain how the model is built; can I explain these regressions to them? Many people don't have technical backgrounds, so it's very hard to do that, and I am not really sure how to do it.

So, the approach I took in Pittsburgh is instead built on the trust between our lab and the communities. It's probably not that the communities trust what the tool or the AI is doing. I also built another kind of tool, a prediction system for smells; we have another project for collecting smell experiences from citizens, and then we can predict whether some bad pollution is going to happen in the local region in the next few hours.

And I was also trying to tell them, oh, there is a type of prediction model for that. But it works not because they trust the model, and not because they think the model has no problem. I think the most important thing that makes it work is that they trust us, as the group of people who create the models, to make it right.

So, I don't really have good answers for that, and it's complicated. Because, for example, I could make the entire AI model explainable and make the entire code open source, but that probably doesn't add a lot from the community's point of view. I mean, from the scientific point of view, people know how the model is built; but probably it only allows people who know how to read the code to understand how it is built.

And we also involved local people in curating the dataset, so that they know the data; this is data that they created. We showed them a bunch of emission video clips, and we have a tutorial that teaches people how to identify smoke emissions. And we involved local people in labeling all this data. So, in some sense, they know that they are teaching the machines how to recognize these emissions.
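[Editor's note: the community labeling step Yen-Chia describes (several residents label each clip, and the labels become training data) is often aggregated along the following lines. This is a hypothetical sketch; the majority-vote scheme, clip IDs and agreement threshold are illustrative assumptions, not details of the actual project.]

```python
# Illustrative sketch: turn multiple residents' labels per clip into
# training data via majority vote, keeping only clips where labelers
# agree strongly enough.

from collections import Counter

def aggregate(votes_by_clip, min_agreement=0.6):
    """Majority-vote each clip's labels; drop clips with weak agreement."""
    training_data = {}
    for clip_id, votes in votes_by_clip.items():
        counts = Counter(votes)
        label, n = counts.most_common(1)[0]
        agreement = n / len(votes)
        if agreement >= min_agreement:
            training_data[clip_id] = label
    return training_data

votes = {
    "clip-01": ["smoke", "smoke", "no-smoke"],  # 2/3 agree -> kept
    "clip-02": ["smoke", "no-smoke"],           # 1/2 agree -> dropped
}
print(aggregate(votes))  # -> {'clip-01': 'smoke'}
```

A scheme like this keeps residents visibly in control of what the model learns, which fits the speaker's point that people knowing they are "teaching the machine" mattered more than model internals being transparent.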

Yeah. So, I'm not really sure about going further; I'm not sure how to actually make things transparent to local people.

From my point of view, it's more like: if people are involved in the loop, if everyone can be at the table and know every stage of the process, then I feel like that's the way to go.

>> MARIEL SOUSA: All right. Thank you. Yeah, that's also a question we are dealing with a lot: how can you make it explainable to affected communities? I think we are also very interested in hearing the practical experiences, or questions, that all of you may have. Abeba, maybe you have some experiences. Maybe you can share some of your insights on this question: how do you communicate AI systems?

>> ABEBA BIRHANE: I guess I will speak to this question in relation to audits of AI systems or large-scale datasets. As (?) said, making the systems as transparent as possible may constitute part of the solution, but it definitely is not the whole solution, and it definitely should not be the end goal. Because, as I said a little bit earlier, even if you can make some systems transparent, that is not enough unless you recognize power asymmetries, unless you recognize the fact that the creation of these systems benefits some people while disproportionately negatively impacting other people, if we go back to facial recognition systems as Nina was talking about.

I agree with Nina's assertion; I would also struggle to find any positive use case for facial recognition systems.  So, any discussion of transparency can be meaningless unless we acknowledge the fact that the creation of these systems, of course, benefits the people that are, you know, developing and deploying them.  So, they do benefit some people, while they disproportionately negatively impact others.

So, kind of like doing some kind of explainability or transparency analysis and trying to show to impacted stakeholders: this is how these decisions are made, this is why we get this output.  As I said earlier, it's really difficult, especially with large systems, to know why certain decisions are made, why certain outputs are produced.  Even if we can do that.

If we cannot supplement it with some kind of accountability, with some kind of, you know, value assessment, value judgment -- which is, you know, what I keep repeating: these systems centralize power, they give more power to people who already have power.  So, unless these kinds of transparency and explainability efforts are supplemented by the effort, for example, to move power from the most to the least impacted, then transparency and explainability are, kind of, meaningless.

Again, to put it in context: it's better to have some kind of processes clear or transparent, as opposed to, like, you know, locking the datasets behind proprietary rights.  It's always in context.  So, yeah.  Just to put my answer in context.

>> MARIEL SOUSA: Yeah.  Thank you very much.  I think we now know that it's very complicated to communicate and to implement transparency.  But what about community-sourced transparency?  Jose, I think you have a question about this.

>> JOSE RENATO: Yeah.  Listening to your response I was wondering -- and I also invite both the audience and the panelists to try to think of other systems as well, because I'm particularly interested in face recognition, and I am also suspicious about that, in the sense that I end up always enjoying discussing this issue.

But I wonder whether we can go beyond this model -- like the transparency of the systems themselves, the models themselves -- and reach the transparency of policies, be they made by the government or by companies, in a way that we approach transparency in a broader manner.

I was having this thought while you guys were discussing this, because sometimes -- and please help me in assessing this hypothesis.  What if we had different procedural transparency mechanisms that relate to procurement decisions when purchasing these systems, or to the capitalistic decisions that are made when designing systems, for instance, in social media?  Is there room for such an idea -- that we see AI transparency as also transparency related to the decisions that back the development, the choices that end up putting these systems before the public?

Was it too confusing?  I don't know.

>> ABEBA BIRHANE: I think you are talking about transparency at a higher level, like transparency about the decision to build certain models, the decision to deploy them in certain communities.

>> JOSE RENATO: Exactly.

>> ABEBA BIRHANE: Yeah, that level of higher-level transparency, as opposed to algorithmic transparency or model transparency.

Yeah.  So, I think that definition, or that framing, of transparency is much more important than, say, making algorithmic systems transparent.  Let me speak with an example, just to make it concrete.  And I know everybody is tired and it's difficult to follow these things when they are abstract.

So, just so you all don't fall asleep.  A colleague of mine has been working on undersea cables that are kind of laid around Southern Africa.  And she has been doing this research.  And one of the biggest bottlenecks for her was, I guess, you could think of it as a lack of transparency in the process, and a lack of information, a lack of public information.

So, she had to do a lot of detective work to find out that these cables are being laid, put in place.  And she also had to do a lot of digging, a lot of detective work, to find out who is behind them.  And there was absolutely no transparency about, you know, why these cables are being laid down, who is responsible for them, what the financial agreements are, and what the other socioeconomic agreements between these big corporations are.

So, these undersea cables are owned by Facebook and Amazon and Google, especially in Southern Africa.  And they really say very little and there is no community involvement.  There is no, you know -- there is no community participation.

So, thinking of transparency from that point of view -- from the point of view of making these processes open and involving the community -- that definition of transparency is really critical.  Because, as I said, one of the big issues, one of the bottlenecks, is just the lack of access to open or, you know, public information.  Yeah.

I am glad we are talking about a different kind of transparency as well, yeah.

>> JOSE RENATO: Yeah, yeah.  I think it's so necessary.  A quick anecdote, and I don't want to monopolize this.  But I recently heard the story of an engineer at Twitter: there was a project for them to share data held by Twitter with a telecommunications company.  I can't remember if it was in the U.S. or not.

And in the end, there was a huge discussion at Twitter.  This engineer in particular was very much worried about ethics, about data privacy.  And this specific project was dropped; they got rid of it.

But I wonder how many of these discussions, of these potential projects -- how many of these decisions are being taken without us having any idea, involving, like, so much corporate power, so much governmental power.  And within this opacity, we are getting so weak with regard to that.

I don't know if Nina and Yen-Chia would also like to jump in on this question.  But I would also like to open it up for everyone to share their experiences on this issue.  And, gosh, yeah, time is flying, guys.  I just now realized.  I'm a terrible Moderator.  Thank God I have Mariel here.

Would someone from the audience want to jump in?  I don't know, either talking about this theme specifically or sharing other perspectives.  Please feel free to do so.

>> MARIEL SOUSA: Patricia.

>> AUDIENCE: Yes, I have a question for Nina, who was saying before that civil society has to engage more.  Yeah.  I'm from Germany, and I work with public authorities there, also bringing ethical guidelines into practice.  And so, the question that we discuss is how to make AI systems from the state accessible and transparent for -- yeah, for the public.

And (?) said, yeah, it's really important that the public servants who work with these AI systems need to know how the AI system works, to have transparency, and also to be able to explain the systems to the public.  And as you said, yeah, we need to make AI systems explainable to your grandmother or your mother.  Who are the persons or the groups that are able to explain these different AI systems to civil society, since it's such a huge group?  Can you share any insights on that?

>> NINA DA HORA: Yeah.  Thank you for the question.  And I will contextualize, because when I participated in the election, I had the possibility to understand more about how to connect civil society with a big demand in Brazil.  So, I will share my expertise in this case in the Brazilian context.

The first thing is about the government decisions.  And here in Brazil, we have many problems, like Abeba said about the African continent.  We have many problems with the relationship between big techs and the government.  So, some big techs, like YouTube and Facebook, were involved in the debate with the government about regulations, about misinformation, about how to connect more people in the Amazon region.

So, we have a problem in that Facebook, Google and, I think, some company by Elon Musk introduced internet in the Amazon region.  And we don't have access to the data, to the documents behind these decisions.

So, actually, here in Brazil we have this situation with the internet, with the data, and with AI.  So, the first thing: in 2020, we tried to ask about this data and about these documents.  We tried this as part of our research, and we didn't get access to these documents, to this decision between big techs and the Brazilian government.

And after this, we created some materials and mini campaigns across the regions of Brazil.  So, I am working with the communitarians -- this word doesn't exist.  Sorry.

I work with some people from different regions in Brazil, like the leader of indigenous people in the Amazon region.  I work with the leader of the black movement in Belém; it's in a state in the Amazon region.  I started producing materials and campaigning offline with these communities to bring these communities into the debate.  Because I'm from Rio de Janeiro; it's another state, it's another context in Brazil.

We can't -- we don't have a way to create these things without these people.  So, I created this point.  I participated for two years doing this.  And this year, we started so many campaigns against facial recognition and against misinformation on WhatsApp.

I think this is important: to create communication offline.  Because we are very, very, very online all day.  But these people in Brazil -- indigenous people, some black movements -- don't have access to the internet.  So, we needed to create a way to connect with these people offline.  That's the first thing.

The other thing is creating materials as a part of education here in Brazil: creating materials, mini campaigns, mini workshops about these themes, introducing everyone to these themes.  And this year we got it -- how can I say?  Maybe I should use some translator -- no, I'm not using a translator.  It's not possible for me.

But we elected some indigenous people and some black people to the government, to create public policies about these problems without big techs.

So, in Brazil, unfortunately, we are labeled by Facebook, by Google, by YouTube -- which is the same as Google -- and WhatsApp.  So, there are some labels in Brazil that connect and create this colonization of the internet.  So, it's very hard, it's very difficult.  But we created these models in offline life.  So, this is important.  Offline life exists.  And we needed to use it more, because it's more protected, more secure for these people.

>> JOSE RENATO: Thanks, Nina.  Thank you very much for your thoughts.

And Mariel, would you like to --

>> MARIEL SOUSA: Yeah, it's already seven past.  So -- yeah, we have run a bit longer than the agenda says.  So, I think we are going to wrap up the session soon.

>> JOSE RENATO: Yeah.  I don't know.  Maybe each of the panelists could, in a Tweet -- and I mean really a Tweet, please, okay?  You guys need to stick to it.  What would you say are the key takeaways that you take from this?  What are the issues that we need to focus on more?  Okay, you guys really need to stick to the Tweet.

Abeba, would you like to start?

>> ABEBA BIRHANE: Yeah, sure.  Transparency can be part of the solution, but it definitely is not the whole solution.

>> MARIEL SOUSA: Thank you.

>> JOSE RENATO: I love Tweets.

Nina, please.  No.

>> NINA DA HORA: Thanks, everyone.  And I believe we need more indigenous people participating in these debates about transparency and AI.  I think diversity is important to make better decisions, here in Brazil specifically.  Thank you.

>> MARIEL SOUSA: Thank you.  Our last panelist.

>> YEN-CHIA HSU: Yeah.  I think the entire system needs to be open to the public, not just the AI itself.  And embrace community co-design.

>> JOSE RENATO: Yeah, they are really good Tweeters.  Thank you all.  It was outstanding to have you here.  Mariel, thank you for the partnership.  Thank you, Abeba, Nina, Yen-Chia.  Have a really great rest of IGF.  And it was great to meet you all.  Thank you.

>> MARIEL SOUSA: And thank you for coming to the last panel of the entire IGF, it feels like.

>> JOSE RENATO: You guys are brave.  You guys are brave.  Thank you.

>> ABEBA BIRHANE: Thank you.

>> MARIEL SOUSA: And also, we would like to keep up the discussion.  If you are interested in staying in contact with us, we would appreciate you reaching out.  We work at iRights Lab; it's a digital think tank in Berlin, and we would like to continue the discussion with all of you via email or Twitter.  Just reach out to us.  We are happy to engage further.

>> JOSE RENATO: Yes.  LAPIN is the same -- Laboratorio de Politicas Publicas e Internet, or Laboratory of Public Policy and Internet.  If you want to learn more about what's going on in Brazil, reach out to us, reach out to Nina, and, yeah, let's keep up this debate.  Thank you.