IGF 2021 – Day 0 – Event #26 Ensuring Diversity in the AI World

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.



>> ‑‑ is highly concentrated. We know that five countries in the world and 200 firms are producing the majority of the developments in AI systems, and the rest of the world are either users or simply not connected, because half of the world is not connected. In terms of diversity, we know that most developments are produced in one single language, and much of what is being produced is produced through the lenses of a particular culture and history. We know also that only 22 percent of the staff working in the development of AI are women, and therefore diversity from the gender perspective is not high in these technologies either. 

And what I would like to advance in the conversation today is, first, to hear from our speakers how they perceive this issue, but also to explore how much of the downsides that we know these technologies bring with them is linked to this lack of diversity. How much of the biases or stereotypes are linked to this issue of not being very representative in the datasets, in the way we design and train the algorithms, and in the way we apply the conclusions of AI? 

This is not a minor issue, because we know that certainly AI can help us make day‑to‑day decisions, but these systems can also support governments in taking major decisions. And day by day, the big platforms are also taking a lot of decisions through these technologies. Diversity for us, for UNESCO, is key to the definition of what needs to be done and the way we advance it in the recommendation, but also because we know it produces more sustainable outcomes. Just a very short comment on the recommendation: Constanza was with us; she was part of the group of 24 experts that the Director‑General put together to draft the recommendation. 

Ethics was defined as a set of values that bring us together as humans, related to the defense of human rights and dignity, to the question of fairness, to the question of living in harmony. These values and human rights were then translated into concrete principles of transparency, explainability and the rule of law, and into very concrete policy chapters. This is a very actionable instrument because it goes into the depth of education, communication, gender (a very strong chapter on gender), the environment and more, to ask: how do we ensure that the technologies support us as humans in delivering solutions for the challenges we face, and how much can we avoid them becoming part of the problem instead of part of the solution? We will be working with our Member States to complement the recommendation with a readiness assessment, to take into account the very different levels of development the countries are experiencing. 

Without further ado, I would like to turn to our speakers and ask them for opening remarks, no more than five minutes each, with two very concrete questions: how do we increase diversity in the digital world, and what measures would you recommend, derived from your own experience and where you come from? I'll start with Sumaya Al‑Hajri. You have the floor. 

>> SUMAYA AL‑HAJRI: It's a great honor. We are in the process of developing AI ethics principles, and during the benchmarking phase, which is a phase of horizon scanning for best practices, we analyzed over 50 official documents and reports from different countries and organizations. They were so diverse: the terminology was different, though the concepts in most cases were the same. And then came the UNESCO AI ethics principles, which we hope will unify AI ethics principles across different nations and hopefully bring interoperability, standards and a level of common language. 

To talk about how the AI ethics principles resonate with the approach we followed: the UAE data protection law was developed recently and announced a couple of weeks ago. The principles promote mainly a multi‑stakeholder approach in developing the data protection framework, taking into consideration international practices in the collection and processing of personal data, such as obtaining informed consent. And in the UAE, we know the perspective that sees technology regulation as a burden to technology development. 

However, we were very keen on developing innovative and flexible legislation. The process of engagement started a conversation with different stakeholders in the UAE and was a key factor in identifying issues, discussing solutions and intervening to balance individual rights and achieve economic prosperity. This is continuous as well. Another principle is the promotion and facilitation of the use of quality datasets for the training and development of AI systems. The recommendation balances the full grant of ownership with exceptions under certain circumstances; this is in paragraph 73. Those circumstances include the enforcement of an existing contract, the protection of public interests, or the interests of the data subject. 

Therefore, we are so glad to see such flexibility in the text. On the topic of bias in AI, it is a topic of great importance. Bias in AI is mostly unintentional, and this is where the focus is: even big tech companies that have all the necessary resources and knowledge fail to avoid bias in relatively simple data specification cases. Therefore, in the UAE, the AI guidance that was issued two years back provides that AI has to be validated by humans, especially high‑risk systems. We are also looking into the principle of system maintainability, which has been mentioned in the AI ethics principles: carried out regularly, maintainability is a process to avoid a snowball effect of bias in unsupervised machine learning and AI technologies. I hope this answers your question. 

>> GABRIELA RAMOS: This answered my question and makes us very proud, because the fact is, as one of the members that was very strongly supporting the recommendation, the UAE is really showing on the ground how to make it count. All of the elements you mention are what the recommendation applies to: the whole AI life cycle, starting with research, development and implementation. But at the end, it's about human determination, and the fact is we need to be aware of these limits to advance more diversity. Let me give the floor now to Alice Xiang, who can bring the perspective of the private sector on how to advance diversity. What does the Sony Group do to achieve that? Maybe you are part of that drive and can share the insights. 

>> ALICE XIANG: Yes. Thank you so much, Gabriela, and thank you for the invitation to be on today's panel. It's truly an honor and pleasure to join the rest of the panelists today. So Sony was actually one of the first companies in the Asia Pacific region to come out with AI ethics guidelines in 2018. Diversity is actually one of the key values among our AI ethics guidelines. So for my opening remarks, I wanted to speak a little bit in terms of why diversity in AI is so important in particular. And so when people think about AI, oftentimes they think of AI as simply being code, maybe data as well. 

But what makes AI unique and different from other technologies is that AI learns from examples and is based on the objectives set out by its developers. So in order to develop AI that works well for all people in a global market, it is very important to put diversity at the heart of AI development. The data that AI learns from, as vignettes of the real world, should be diverse. If a computer vision model, for example, is only trained on faces of Caucasian individuals, it will likely struggle to accurately identify and recognize individuals from non‑Caucasian groups. Such cases of mis‑recognition can lead to substantial harms given the proliferation of AI into higher‑stakes domains such as health care, education, finance and employment. In addition, when it comes to setting the objectives for AI, it's important to consider diversity and fairness. 

How can we ensure that the model is optimized to work not only for those who are in the majority of the training dataset, but for all relevant individuals? And in addition to how diversity can affect the performance of the model itself, it's also important to consider the fact that AI products are increasingly global in their reach. So in order to address AI regulation and ethics on a global level, we need to consider how values and cultural context can differ by country. AI ethics, of course, sits at the intersection of AI and society. So without an understanding of the societal context where the AI is being developed or deployed, it's extremely difficult to adequately address relevant societal harms. For example, law enforcement use of AI is extremely controversial in the United States because of its history of biased policing practices. 

So AI developers who study this tend to be quite well acquainted with the failure modes of trying to develop AI for the U.S. law enforcement context. Of course, every country has its own unique societal inequities and its own failure modes that can be exacerbated by AI. And when we see cases like misinformation and polarization in Myanmar contributing to attacks on minority groups, this reflects the way AI can intersect with unique societal tensions in specific regions and create very harmful results. Counteracting such harmful results requires a greater awareness and understanding of the context in which AI is deployed. Finally, I wanted to go from diversity in terms of the countries where AI is being developed and deployed to diversity at the level of the companies that are developing AI. 

Here it's very important to consider how company culture can foster or hamper diverse AI teams. AI is a very male‑dominated field, and it's a field primarily comprised of individuals from Europe or Asia in terms of ancestry, so it's very important for companies to think about how to include a wider variety of perspectives in their AI development. Oftentimes companies blame diversity issues on pipeline issues in terms of the number of available candidates with relevant backgrounds, but the reality is that such pipeline arguments fail to account for higher rates of attrition among women and minorities in tech positions. 

Such attrition is due to the fact that, oftentimes, AI team cultures are not necessarily inclusive and can be toxic to both women and minorities. So when we are thinking about addressing AI diversity, it's very important for us to take both a macro and a micro view. AI is increasingly important for the global economy and is having wide‑ranging societal impacts. So in order to build a future of more just and equitable AI, it's important to have diversity at the heart of our AI development. 

>> GABRIELA RAMOS: Thank you so much. It's telling, Alice: you are speaking on behalf of Sony, but do you feel that Sony is representative of this drive to ensure diversity? 

>> ALICE XIANG: Yes. As I mentioned, that's basically one of our very key tenets and pushes in our AI ethics principles and in terms of the work we are doing. So I think it's critical for every single company entering the space to think carefully about the role that diversity plays as they are building out their AI teams and the technologies they are developing. 

>> GABRIELA RAMOS: Sony is super strong in the development of AI, one of the major contributors in the world. Constanza, what's your take, coming from the other side of the world and, in this multi‑stakeholder approach we are promoting, representing, we can say, the voice of civil society? 

>> CONSTANZA GOMEZ-MONT: A perspective from civil society indeed, today from Brazil, and also with the perspective of the Global South in various ways. The digital world has a profound and long‑standing diversity and inclusion challenge. When we see, for example, that only 8 percent of the CEOs of the top 500 companies around the world are women, or the data you showed, of 22 percent of women participating in the AI sector, we are not talking about superficial facts; we are talking about the root causes of various issues that matter for fairness and the fundamental rights we touch. To close the diversity gap, we have to address diversity in the workplace, for sure, as the other speakers have said. 

That means ensuring both appropriate hiring processes and retention processes. Some of the foundations for addressing the diversity of talent have been strengthened during the pandemic: we have seen companies go digital and start to hire talent from different geographies. People who live all over, not only in the tech clusters, can access this workforce. This can become an important factor for many underrepresented communities that have local networks and social support systems in their hometowns. But we have to go beyond addressing diversity in the workplace and not expect that checking the diversity box means we have accomplished inclusion and equal opportunities. To this end, I believe all of the speakers here today recognize that diversity and inclusion aren't the same thing. 

Diversity is about representation, and indeed it is a very important step. Inclusion is about respecting and giving value to diverse groups, which is equally important. To this end, education programs for managers and staff on implicit bias, not only understanding cultural differences but celebrating them as part of the DNA of the institution, become key when we talk about diversity and inclusion. And for the diversity and inclusion of the digital world to lead to more opportunities and social equality, we must also broaden the discussions and narratives with a broader lens, looking at the barriers beyond the workforce: actually pushing for more and better ways to prepare talent from underrepresented groups, and for greater representation through programs tailored to communities, with a strong focus on youth, for example. And tech skills have to be complemented, of course, with other skills such as negotiation. 

We see that it is not only tech skills that make or break the participation of underrepresented communities in the tech sector, but core skills such as negotiation, which, for example, has a huge impact on the salary you obtain when joining a company. Moreover, it is not only about how to have more diversity and inclusion in the development of digital tools, through the tech sector, but also about the need for more diversity and greater inclusion in the accessibility of these tools. When we talk about accessibility and democratizing access, enabling communities all over the world to participate in the digital economy and benefit from these tools makes a huge difference for broader social justice and social equity. 

For example, I'm right now in Brazil, where I learned about a case study in one of the poorest neighborhoods in the country. An organization called Recode taught students how to use VR video to narrate the stories of their communities, and with these skill sets they made a video about one of the most remote areas and communities of Brazil, showcasing broadly how they did not have access to roads. Using these digital tools, these students from a remote community in Brazil got the attention of the government and everyone, enabling a greater conversation on what to do. Long story short, this led to the creation of more roads for this community, which then led to access to agriculture and the selling of products in the market. So it is not only about diversity in the development of the tools, but also, crucially, about diversity in the access of underrepresented communities to these tools for broader social impact. With this, we can talk more about language. 

I believe only some 350 of the roughly 6,000 languages in the world are represented online. So when we are talking about inclusivity and cultural representation in this digital economy, we must embrace the fact that this is a very diverse world, and we must have better and better tools to address this challenge. There are tons of examples we can talk about on the specific language barrier, some of them proudly from Mexico, the country I'm from, where our youth are helping indigenous groups translate their languages into diverse applications. So in general, when I see this topic of diversity and inclusion, I like to think about it not only as how we are going to promote it in the development of the tools, which takes us to the tech sector and diversity, gender participation, equality and representation, but also, through a broader lens, as how these tools can help create broader inclusion in all aspects of society. 

>> GABRIELA RAMOS: I think that's the right framing, and this is something Alice was also mentioning with the micro and the macro: looking at the technology itself, how diverse or biased the algorithms and the datasets are, and how much we can ensure more representativeness. The other point, which I think is probably more important, is how we ensure the digital transformation contributes to closing access gaps in the real world, and this is exactly UNESCO's main message: ensuring that the technology contributes to the broader goals. 

I want to pick up on one of the issues you were talking about: 350 languages online, when we have thousands, no? And, as you said at the beginning, we still have half of the world not connected to the internet. So it's really quite an effort we will need to make to ensure that the richness of the world, the cultural richness, the geographic richness, the language richness, is translated into these marvelous technologies. So maybe we can ask Sumaya. Coming from the Arab world, we know how rich Arabic culture and civilization have been, and the richness of the language, but we know the role and prominence of the Arabic language in the AI world is not commensurate, and there are many instances in which we have seen that the developments do not work as well with Arabic as they do with some other languages. Could you help us with this? I'm sure it is something you have been thinking about. Here at UNESCO, we also work on the promotion of languages and dialects, the promotion of culture in all its immensity. So, Sumaya. 

>> SUMAYA AL‑HAJRI: It's stated in the UAE strategy that we aim to build wider knowledge production in the UAE. One member of the AI expert group has recently published a very interesting article addressing the challenges in developing Arabic AI. Among the general challenges, for those who are not familiar with the Arabic language: there is only one classical Arabic language, which is often used in formal writing and occasionally in public speaking, but there are also 22 Arab countries in the Middle East speaking different dialects, and in some of these countries there is more than one dialect or accent, sometimes with different vocabulary even. That is the main challenge: despite there being more than 450 million Arabic speakers around the globe, AI systems have not been able to cope with all the differences within the Arabic language. 

Speaking of the other general issues in the Arabic language that make it hard for AI: one is the omission of most vowels in written words. Another is that Arabic grammar is more complex than that of any other language, as far as I know, in addition to the morphological richness of the Arabic language. There is also the similarity of words: you see the exact same word in different sentences, but the meaning differs based on the context. Diacritic signs play a role in changing meaning, so while it's easy for an Arab to identify words without diacritic signs, you can't imagine how difficult it is for an AI system. 

Finally, all of that has contributed to the lack of Arabic content. Connectivity could be one of the reasons, as you mentioned, Gabriela, and as a consequence there is a lack of data for training AI systems. For the different accents and dialects, it's even harder: most of the content in the Arabic language is in the formal register, not the different accents and dialects. That really limits the research community in this field. 

>> GABRIELA RAMOS: I know the UAE is making a lot of investments to really incentivize more development, more representation and more technologies developed by the region. Do you have plans for that? Is there something you would like to share with our audience on how the UAE is contributing to diversity in human language? You're talking about Arabic, and you're telling us there's not a single one; there are many, and we need to capture these nuances. 

>> SUMAYA AL‑HAJRI: It's mainly a collaborative approach. This is what the UAE is doing currently. We are starting with the academic institutes in the UAE, and we are supporting any research taking place in this field as well. The AI expert group has a number of researchers conducting research, and we are communicating with them to see whether they would like any sort of support, for example with data gathering. It all started there. The other initiative we have been working hard on is the attraction of talent, mainly in the field of coding, as part of the national coding program. The more talent you attract, the more diverse the ecosystem will be, and hopefully this will solve part of the problem as well. 

>> GABRIELA RAMOS: Very interesting. We will follow up with you, because this could be quite a contribution to the work we need to do to implement the UNESCO recommendation. Let me then turn to Alice, because in this multi‑stakeholder approach, a major source of innovation comes from the private sector. We have been in a world where some of the major developers, or the countries that have leadership in these technologies, prefer light‑touch regulation to move the market forward, focusing mostly on commercial gain and maximizing the economic performance of the business sector. 

The recommendation that we have just approved at UNESCO, Alice, with the support also of the Korean government, is calling not for more or less regulation, but for more effective regulatory frameworks, because we believe the downsides we are confronted with in this almost rule‑free experimental world are causing a lot of damage. How do you see that from the private sector? How would you engage with this new wave, because the tide is turning, and how do we ensure businesses are also coming forward to join us in becoming more concerned about inclusiveness and diversity, in a world where at the same time you need to have good numbers for the business? So the floor is yours. 

>> ALICE XIANG: Thank you so much, Gabriela. That's a really important question. From my perspective, I think it's quite important to have collaboration between practitioners, policy‑makers and civil society in this space in order to determine the specifics of AI regulation. I think, increasingly, companies do have their internal apparatus as well to try to address these harms. For example, in my role as the Head of AI Ethics for Sony, my teams are responsible, in part, for conducting AI ethics assessments internally and trying to get ahead of issues before they become a problem by identifying potential harms even at the planning stage for AI technologies. So before a single line of code has been written, we conduct assessments to try to ensure that the types of products people are proposing are in line with our AI ethics principles. 

And I would say that even though companies are doing this one by one, regulation is of course a very important aspect of this picture as well, to create more conformity across the board. That said, I think the challenge in the AI regulation space is that there are not necessarily yet clearly defined best practices across industry that can simply become the basis for regulation. This is both a very broad space and also a very new space. So by and large, companies have very diverse practices at the moment and not necessarily a clear sense of what best practice is. Indeed, even when we get to very basic questions in this space, like how you define fairness or unfairness in the context of AI, we see extensive debate on these fundamental questions. 

In addition to that, I think it's also important to recognize that, given the nature of ethics in general and AI ethics in particular, there are no 100 percent correct answers, and indeed, sometimes ethical principles or guidelines are in tension with each other. For example, should your goal, from a fairness perspective, be for your AI product to work well for as many people as possible in the place of deployment, or for your AI product to work equally well for all people regardless of demographics? You might think we want both of these things, but in practice it can be quite difficult from a technical perspective to optimize for both. As another example, transparency and security are sometimes in tension. 

The more you reveal about your data, how your model was developed and how it arrived at specific decisions, the more possibility there is of security breaches from hackers who now have a better understanding of your system and its potential weaknesses. And so when we think about policies or regulations in this space, it's very important to try to strike the right balance, where ideally you're incentivizing all the relevant principles. In practice, it can be quite a tricky line‑drawing exercise, especially in the areas on the margins where there can be some tension between multiple ethical desiderata. 

Turning to the point of how we think about policy and regulation on a global level: I think this is a really important question given the global nature of AI. In fact, this is a challenge we are seeing already in the data privacy space, without even thinking about AI regulation in particular. With data privacy, every country has its own laws, sometimes these laws have quite different definitions and quite different rights in place, and this can create a lot of tensions when you are trying to build AI products for a global market. 

So for example, let's start with the very basic case where you are trying to collect a diverse dataset because you're concerned about bias in your models. In this case, you need to consult privacy lawyers from around the world in order to ensure that your data collection practices are actually in compliance. It's not as simple as saying that as long as you comply with privacy laws in a few regions that have particularly stringent laws, you'll be okay across the globe. In practice, there are a lot of very small nuances that can make that quite difficult. 

Even something as basic as wanting to ensure that people are represented in your data often comes with all of these additional challenges just from a global conformity perspective. So from that standpoint, I think as we think about developing further regulations in the AI space, it is quite important to have this sort of international discourse because to the extent regulations have some conformity across different regions, that can be very helpful for ensuring that regions are included in AI development and that AI products can have more of a global reach. 

>> GABRIELA RAMOS: Let me ask you, because, yes, you're making very good points. For somebody outside the industry, it might seem very simple: you need to be inclusive, you need to be transparent, you need to share and be explainable, and all of these principles we have in the AI world. But then you have these downsides in terms of the vulnerabilities arising, or even business operations that might be competing with some other outcomes. The fact is that globally, what I see is a growing concern about the lack of accountability; in some places, not all, some developments have caused harm. 

For example, the recommendation of UNESCO calls for explainability, of course, but we are also recognizing that there are multiple objectives and that this is not a straightforward issue. The fact is, I feel that governments are increasing their stance to be more directive in terms of regulation. We are seeing that happening in the EU, with the directives being negotiated; the U.S. has several cases, in procurement and prosecution at least, and has launched this bill of rights. Where is Asia, and I know Asia is big, where is Korea, where are the main players there? Because the fact is, if you go for national regulations, then for the companies this is very heavy. Compliance becomes very heavy, and we prefer to have common rules in general and operations that are not so complex, no? 

>> ALICE XIANG: Yeah, certainly. I would say, from the Sony perspective, even though we are headquartered in Asia, we take a very global perspective on this, especially given many of the regulatory movements in the U.S. and the EU. For example, we have issued comments on the proposed EU regulations. I would say that for many companies, the general perspective towards these regulatory movements is not so much that people don't want any regulation. In fact, I think many companies would be quite in favor of having clear regulation that provides clear rules in terms of what are acceptable versus unacceptable use cases and what is needed for compliance. 

From that perspective, I don't think there is necessarily always a tension between industry and folks who are pushing for more regulation. That said, it is very difficult to get to the level of detail where implementation is straightforward, and I would say typically, from an industry perspective, a lot of the push is for more clarity, because we can all agree upon very broad principles: fairness, explainability, privacy, security, trust, safety, and so on and so forth. It's quite challenging when you get to the implementation stage. I think the worst case would probably be if we have regulations that are quite strict but don't provide much clarity, so that folks are very much operating in the dark in terms of what is appropriate versus what is inappropriate. 

That is especially challenging if you are, for example, a smaller company that doesn't have as much resourcing around compliance. Of course, that's not the case for Sony; we are quite involved with many of these policy discussions. But it's something to consider in terms of the broader ecosystem: how to ensure regulation doesn't only favor larger companies that can invest in this area, but provides clear enough guidance for everyone in this space. 

>> GABRIELA RAMOS: We will be calling on you; we'll keep a multi‑stakeholder approach, as in the construction of the recommendation. Let me turn to Constanza. We have talked about language and about business positioning, and at the end, what I'm getting from the panel is this consciousness about the need to deliver for good: the need to enhance the contributions of the technologies for positive outcomes and to control the downsides. But Constanza, you and I worked together to see how much the gender lens was really introduced as a particular element in the recommendation of UNESCO. 

And I would say that in this technological context, all the gaps we see for gender around the world, in labor, in representation in decision‑making, in all the usual incentives for women to be in certain disciplines and not in others, are compounded, and they are scary, because the fact is that these technologies are not just helping us advance certain areas of our economy; they are building another economy, and therefore not having women well represented is a risk. So could you share with us very concrete elements of how you think this aspect of inclusion can be better tackled? 

>> CONSTANZA GOMEZ-MONT: Definitely. First, what I love about the recommendation and the instrument we built together with the input of thousands of people around the world is that it breaks with how gender was normally treated in other documents or ethical recommendations, where gender and, you know, participation fell under bias and discrimination. What I love about this instrument is that it has a special aspect to it: it has a section that talks about gender not only in the code and not only in the biases of language, but also in the greater and more profound root causes of the gender disparities we have in the sector. 

It is easy to say that only 14 percent of researchers participate in the sector, but we also have to question what the root causes are that are impeding gender equality in general. For example, when we are talking about AI systems and the gaps, we are not only talking about certain aspects, but about everything human. For example, we are talking about biases in the hiring of the work force that translate into economic opportunities for women, but we are also seeing whether you can have access to credit, for example. We are talking about something broader, and what I love about the recommendation is that it goes beyond and touches other points: how can we talk about effective women's participation if we are not talking about the culture of institutions? 

And the sexism, for example: what policies are in place, in practice, for institutions to be able to retain talent and retain women in the work force? How can we talk about equal access to opportunities for women in the work force, especially in the digital economy, if we are not addressing the fact that we need maternity and paternity leave in institutions? How can we talk about it if we are not addressing the fact that there is a support network that has to be set in place for women and families to be able to leave their babies in a secure place while they work? When we talk about this aspect, one point is how we can have a specific focus, an action plan that has specific actions for gender, and not gender merely being embedded in other aspects of the action plan, such as biases in the AI life cycle. It has to have a special focus in action plans; that's one of the recommendations this ethical instrument sets. 

Strengthening and highlighting the fact that we need more education programs tailored for women, more economic incentives tailored for women. And especially, and I'll end with this because it's a topic we are working on with gender very passionately, we cannot have a discourse that is, oh, gender, as if it were one group. We are talking about the diverse areas of gender: women with physical disabilities, women who are minorities, sexual preferences. We have to broaden the aspect when we talk about gender equality and female participation, to talk about the various lenses there are, because a narrow view of gender will lead to narrow actions. One of the underlying facts this recommendation highlights is that gender is not one thing; it's a multiplicity of perspectives that need to be included in action plans. 

>> GABRIELA RAMOS: I'm always thinking, because I have spent part of my career promoting inclusive growth, that many of the policies and rules and incentives you use to increase female participation in the labor force, or to increase female presence on boards, all come down to the same thing: to level the playing field for women. As you said, then you need to go to the very specific issues to ensure that that's the case. We really are looking forward to working with you, Constanza, on this question. If only the developers would realize that 85 percent of developments in the AI world are done by male‑only teams. Sometimes it just takes recognizing that in your team. 

It's as simple as that. We are not talking about regulation or top‑down approaches, just about getting into the mind set of the people in this business the fact that they need to have diversity at the table and in their teams, and to ensure this is the case. I want to tell you that we have had some questions here, and I want to move to the public, because this has been a fascinating conversation and I could spend three more hours with you, my dear friends, because each one of you is putting things on the table that bring up other issues. There are some questions that I want to raise with you because they are linked to the issues we are discussing. 

At the end I would like to ask the three of you the question of civil society: how do we ensure we also engage the voices of civil society on these issues? But then there was this question by Celine DuPont, who is asking us: when you talk about diversity, do you think that you need a general‑purpose kind of regulation, or do you think that you would need something specific on finance and health? Very fast, the three of you, and then we move to civil society. In terms of diversity, do we need to treat finance differently from health, from education? 

>> SUMAYA AL‑HAJRI: That's a great question. The preferred approach to technology policy is a cross‑cutting policy; this is something I learned during my experience in the technology sector. It is similar to the AI ethics principles approach: you set very general principles so others can adopt them in a way that suits their nation, legislative system, or sector. I would go for a general statement of policy that sets out the diversity principles and is then adopted in different sectors. That's my preferred approach, I would say. 

>> GABRIELA RAMOS: Thank you. What about you, Alice? 

>> ALICE XIANG: I would say that in terms of diversity of teams or things like that, health care or finance are probably not unique areas to regulate. When we talk about diversity in terms of the need for fairness checks or considerations about the training data, I think there are potentially arguments that you might want representation in areas where there might be concerns about bias. 

>> GABRIELA RAMOS: And you, Constanza? 

>> CONSTANZA GOMEZ-MONT: Plus one on the transversal side and the policy perspective of this being a foundational aspect for any type of industry or any application. However, maybe emphasizing that within the broad world of diversity, we should have tailored action plans, for example for gender or minorities, going deeper into what to do: what does access mean for the migrant population? What does access mean for people with physical disabilities? I believe where we can go vertical is in how we bring in different groups and populations and different cultures; and on the transversal side, yes, this diversity that touches every aspect. 

>> GABRIELA RAMOS: That was a very good question, because the fact is it's very important to have the general framework, but then think about, for example, health, no? The health data that we are generating and the health applications AI is developing are just mind‑boggling in terms of the promise of bringing solutions to many of our populations, but we have been confronted with the fact that in the health sector, some of the solutions are really male‑designed. Sometimes the technologies are not able to provide the very same outcomes when it comes to women; again, they perform less well. 

It's quite an interesting question. Let me then turn to you, Constanza; we will go the other way around, because it's not easy ‑‑ because you are also in an organization that is bringing in the voices of civil society in many ways, doing very good research but also very good advocacy. It's easy to say bring in the civil society. We did it, and we could not have done it without regional consultations, but for that, we spent two years reaching out, bringing in representative people, and getting to understand them. Companies need to deliver and governments need to deliver, so how do you ensure that you bring in these voices and get the insights and contributions they can make? 

>> CONSTANZA GOMEZ-MONT: There has been a lot of open‑mindedness; we are definitely seeing a shift in mind set, where transparency is a base of trust. We are not only seeing it in very specific applications, for example algorithmic explainability, but seeing it more as a culture change: there is an explicit added value in having trusted ecosystems to move forward with effectiveness and sustainability. This means certain policies within an institution, but broadly speaking, we are talking about governance frameworks. In Mexico, we are doing a lot of policy prototypes: how can we regulate a field that is moving so fast and that has tensions all over? 

We are having these new forms of experimental governance, with civil society taking that facilitator role, bringing together governments, regulators, companies and startups, for example, and other experts in civil society to tackle questions together. For example, in Mexico: what does it mean to regulate transparent and explainable AI? These questions are quite complex, and understanding them requires multidisciplinary and multisector teams. This brings effectiveness and appropriation, and when we are talking about how to transform principles into action, civil society has to have a seat at the table; if not, sustainability and profoundness of action are not possible. For us, there are two factors: transparency in the processes and rethinking the power structures behind those processes. We know there is a power component to this; making the process very transparent and enabling diverse voices throughout the entire life cycle have been key to our efforts. 

>> GABRIELA RAMOS: We just learn from each other; that's so enriching every step of the way. Alice, over to you, and then Sumaya for the last word on this panel, but not on our conversation, which will continue. 

>> ALICE XIANG: Yes. I think it's incredibly important as we move forward to think very carefully about how we can engage with civil society and various stakeholder groups in the process. I think what's very important there is also to try to avoid the risk of tokenization, where certain folks are included in the conversation in order to sort of tick a box and say, yes, we consulted with the affected communities or vulnerable groups, and so now we have gotten that stamp of approval. What is often very challenging in engaging with stakeholder groups around AI is that, to some extent, all of us are relevant stakeholders. It involves so many different communities. At the end of the day, any subset you bring to the table is only going to be one very small portion of the overall group. 

So I think it's quite important for folks, when they are engaging on the civil society side, to acknowledge that and recognize: okay, even if we say we are engaging with stakeholders and engaging with civil society, we are really more specifically engaging with these particular slices. Pragmatically, that will always be the case, but it means we need to acknowledge and think about the blind spots that still persist. And as Constanza was saying, thinking about process and documenting that process is very important, to ensure these blind spots do get documented so that down the line, if there are problems, they can be revisited. From that perspective, my primary caution would be: as important as it is to try to make sure every stakeholder group is represented, always remember you're never actually going to be able to get everyone's perspective, and thinking that you have can make your process seem more robust than it actually was. 

>> GABRIELA RAMOS: That is true, but just having this in mind is important. We can be in our corners, thinking we can do without reaching out; that is also risky. We know it has become very risky. Sumaya, your final thoughts. 

>> SUMAYA AL‑HAJRI: For civil society, it would be to ensure inclusiveness and diversity through two different approaches. One is to focus on awareness for the leaders: do they portray inclusiveness in their operations? The other way is to ensure the diversity of the team. I saw guidelines related to AI procurement in government, and one of the principles they stated is to have diversity: diversity in terms of gender, background, and knowledge, and this would ensure inclusiveness as well, after the development phase of the system. Then another way is something we are looking into right now in the UAE, the awarding approach: we define criteria of excellence, how you define good implementation of an AI system, and award those who are really in compliance with the guidelines and principles, which are not yet enforceable. And there is the enforcement way, which is fighting discriminatory behaviors; this is something we are enforcing in order to ensure diversity and, as a consequence, inclusion as well. 

>> GABRIELA RAMOS: Well, I think that what Sumaya is reminding us is that governments have a duty of care and have the tools to ensure that the business models these technologies deploy will be aligned with the outcomes we all want. Diversity might not be an outcome in itself, although for UNESCO it is, but it is the most effective tool we have to ensure good outcomes, whether in the technological and digital developments we are talking about today, or simply in any school or government dealing with things not related to the technologies, where a lack of diversity is always a source of risks not to be taken. I feel that this conversation has been enlightening. 

I could not ask you all of the questions that I have in the chat. I have one more from Mr. Paen, asking about peace: the whole thing that we are discussing now, how to keep peaceful societies and how to control the downsides. I have to say that when we started working on the recommendation on AI, there were these looming dangers that we know, in terms of misuse or abuse, the lack of accountability, the lack of transparency, the lack of representativeness, all the things we have been discussing today, that were worrying many people. What you have shown us, this very powerful group of ladies, is that we have the means, we have the will, and we will do it. 

I think that what we take from the conversation with you, from UNESCO, is that we have champions all over the world in this multi‑stakeholder approach, because our audience will notice that we actually brought the government, we brought the business, and we brought the civil society perspective, to really build a common narrative. For me, it has been a pleasure, and with the recommendation on the ethics of AI of UNESCO having been approved just this month, I think that we have a lot to build up from these conversations and from learning from individuals like you. So thank you so much. We come to the end of the panel. It has really been a great, great source of knowledge and inspiration, and we will continue the conversation. Thank you so much.