IGF 2023 - Day 4 - WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation - RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> YVES POULLET:  Hello, Siva.  Can you hear us?  Can you test, unmute your mic and test?

    Siva?

    Can you hear us, Siva?

    Siva, can you hear us?  No?

    Dawit, can you hear as well?

    >> DAWIT BEKELE:  Yes, I can hear you.

    >> MODERATOR:  Thank you, can you test your mic?

    >> SIVA PRASAD RAMBHATIA:  Slightly low, your voice is slightly low.

    >> Siva, do you see us?

    >> SIVA PRASAD RAMBHATIA:  Yes.

    >> Hello, Steven.  Thank you for being there.  I know you are very tired and Dawit?  Okay.

    (Pause.)

    >> Okay, thanks a lot to everybody who is present in this very big room.

    Thanks also to the remote audience.  It is quite clear you have the floor too during the question and answer time, and we hope that you will intervene.

    So perhaps just to start, as you know, UNESCO has taken a certain number of initiatives.  We must underline the importance of these initiatives as regards AI ethics regulation.  Now, you know that they published in November 2021 a recommendation on AI ethics and, more recently, perhaps you have seen that they published a report about the application of the recommendation to ChatGPT.  And that is why it is an honour for us to host Gabriela Ramos, who is very well-known.  She is the Director of the UNESCO department which is in charge of implementing the AI ethics recommendation.

    Gabriela Ramos was unable to join us because of the time difference between Paris and Kyoto, but she sent a video yesterday in order to be present with us.  So perhaps you might launch the video.

    >> GABRIELA RAMOS:  Our work on artificial intelligence is grounded in the Recommendation on the Ethics of Artificial Intelligence, which was adopted by 193 countries back in 2021.

    The recommendation determines that artificial intelligence technologies need to be well aligned with human rights and human dignity.

    (This video is captioned.  Realtime captioning will resume when the video concludes.)

    We do not stop there because all this framework then is translated into very concrete policy recommendations.

    We have 11 policy chapters that go into gender issues, data issues, environmental issues and many more.

    And those policy areas instruct member states, for example, I'm going to give you an example, to develop data governance strategies that ensure the continued evaluation of the quality of training data, promote open data and data trusts, and call on member states to invest so that gold standards are set and to ensure that compensation is given in relation to this information.

    And then there is the recent release of foundational AI models: ChatGPT gained 1 million users within the first month.  We have seen the excitement; it is impressive what they can do and what they can offer in terms of services to the world.

    These models have also spurred concerns about ethical, social, political and legal implications and highlighted the urgent need for robust and effective governance systems and regulation.

    We have conducted our own analysis of generative AI models through the lens of the recommendation and found a range of concerns related to fairness and nondiscrimination, reliability, misinformation, privacy, data protection, the labour market and many, many more, with an accelerated pace for the issues that we had already identified before.

    These systems replicate, but also massively scale up, many of the same ethical challenges of previous generations of AI systems.  For example, we have known about the potential for gender and racial biases in AI systems for many years now, and we see the same kinds of stereotypes being massively reproduced in the latest systems.  For example, narratives generated by ChatGPT reinforced gender stereotypes, portraying women as less powerful and defining them by physical appearance and family roles.

    Just last week, a researcher at Johns Hopkins found that it was impossible for Midjourney, a commonly used AI image generation tool, to produce a picture of a black doctor treating white children.  Whatever the prompt used, the system would only produce a picture of a white doctor treating black children.

    But there are new and pressing challenges.  For example, around issues of authorship and intellectual property rights.

    The platforms do not clarify their sources and lack transparency on how they work.  Legal actions are currently underway to determine, for example, whether OpenAI infringed copyrights by training its model on novels without the permission of the authors.

    And, on the other hand, to decide whether or not the output of a generative AI model can itself be copyrighted.

    This is another area where the incredible concentration of economic, and now cultural, power in the hands of a small group of companies, and of course a small group of countries, needs to be addressed in a determined manner to make it more inclusive and more representative of the very diverse world in which we live.

    And the way in which these current experimental AI tools have been unleashed on the public provides a prime example of why it is imperative for member states to ensure that actors identify, clarify and mitigate the risks of harm from such models before rushing to deploy them in the markets.

    And to address this challenge UNESCO has developed an ethical impact assessment.  This assessment facilitates the prediction of consequences and mitigation of risk of AI systems via a multi-stakeholder engagement before a system is released to the public.

    It allows those developing or procuring AI systems to avoid harmful outcomes, or at least to think about them: to have a tool by which we can understand what the systems can do, what needs to be enhanced and what needs to be corrected.

    And ethical reflection itself is a vital tool to comprehensively address the questions that everybody has in their minds right now about the risks of AI systems and how we can identify them.

    We are currently piloting the ethical impact assessment as well as another tool that we were asked to produce in the recommendation when it was adopted by our member states: the readiness assessment methodology.  This is to see how well prepared countries are to deal with the legal, multilateral and other issues related to AI, and we are working with 50 governments around the world to deploy this tool.

    The results of this assessment will be made public on the UNESCO AI ethics observatory that we are launching with the Alan Turing Institute but also with the ITU.  This will be an online platform with good practices from across the globe, while creating interactive spaces for people to raise awareness and to understand better, to look at what works and what doesn't.

    And then to translate that into actions on the ground to equip ourselves, the governments, the people, Civil Society, to deal with these technologies better.

    And in this sense, I am also glad to share with you that we started a pathbreaking project with the Dutch digital infrastructure authority, supported by the European Commission, to enhance the capacity of the Dutch and European competent authorities to supervise AI, all considering that the European Commission is going to be implementing its AI Act soon and the institutions must be well equipped to deal with the issue.

    Here again, large language systems and generative models are high on everyone's agenda.  The detailed analysis from these projects will form the basis for the development of a model governance framework, bringing together the elements of an ethical AI ecosystem to help governments develop robust governance systems aligned with the recommendation.

    We will present this framework at the Global Forum on the Ethics of Artificial Intelligence that is going to take place in Slovenia in the spring of 2024, and I am looking forward to seeing you all there to continue learning together and to continue building together the capacities to deal with these technologies.

    Thank you very much.

    (End of video.)

    >> YVES POULLET:  Thanks, Gabriela, for this marvelous introduction.  I think this will help us to fix exactly the scope of our discussion.  As you have seen, there are a lot of challenges raised by generative AI systems.  A first question: perhaps it will be quite interesting to see, among the persons present, who has already used a generative AI system, like ChatGPT, like BERT, like KoGPT, the Korean generative system.

    Raise your hand?

    Oh, I see -- I thought everybody had already used it.  You remember that in November 2022 Sam Altman of OpenAI put ChatGPT services on the market for the general public.  Perhaps it is quite interesting to remember that three years before, some said that GPT must be reserved for professional users only because it was too dangerous for the large public.

    The market is changing, it is business, it is normal.  Perhaps it is quite interesting to recall it.  This initiative was a success: one million users five days after the launch.  Now we see a multiplication of applications supported by what we call foundation models, like Google's BERT, ChatGPT, the Korean KoGPT, Meta's transformer models and others.

    What is quite interesting is that all these foundation models are general purpose models; they are not built for one specific purpose.  But it is quite clear that, apart from the foundation models, there are a lot of applications developed by the same companies or by other companies.

    And now we are using these applications.  For instance, my students are using ChatGPT to prepare and enhance their theses, and it is quite clear that if you feel alone, you can turn to companion chatbots like Replika and others, which understand you like your best friends.

    If, as a company, you need to develop a marketing strategy, it is very easy to use Jasper, an application for finding the right slogan and the right logo.

    If you are a job seeker and you want to write a successful letter of motivation, please use a generative AI application.

    So generative AI systems are more and more used.  I would now like to give the floor to Dawit in order to answer a certain number of questions.  My questions would be the following.

    First, generative AI systems, and I mean both foundation models and generative AI applications, are definitively AI systems.  Could you please, in a few minutes, explain the peculiarities of these systems compared with other AI systems and, linked with those peculiarities, explain why generative AI systems need specific attention from public authorities, distinct from that afforded to the other AI systems?

    I have another question.  The applications of large language models are diverse and include text completion, text-to-speech conversion, language translation, chatbots, virtual assistants and speech recognition.

    They are working with big data.  Which ones?  Is there a problem with the languages used within this big data?  And a last one: is this a revolution?  Mr. Bekele, you have the floor.

    >> DAWIT BEKELE:  Thank you very much.  So generative AIs are advanced artificial intelligence systems designed to generate human-like content, including text, images, and even multimedia.  You have probably heard of, and I'm sure used, applications such as ChatGPT that answer your questions almost as if there were a human being at the other end of the line.

    There are also applications that change photos into artwork or translate people's speech into another language in realtime; for example, you have probably heard the recent news showing the Secretary-General of the UN speaking a language that he doesn't speak.

    So there are so many applications that generative AI has already shown us.

    These models are built on large-scale platforms; models like ChatGPT are trained on vast data sets to learn the patterns and structures of human language and other forms of data.

    The key peculiarity of these systems lies in their ability to generate coherent and contextually relevant content on their own, based on the input they receive.  This is unlike search engines, for example, which we have been using for quite some time now and which provide useful responses, but often not in the form that you would expect from a human being.

    Generative AI responses are very much like what you would expect from a real human interlocutor.  This has, of course, numerous benefits, since generative AI output can be used almost directly by humans, unlike results that require filtering, formatting and rewriting.
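    To make the contrast with search concrete, here is a deliberately tiny, purely illustrative sketch of the autoregressive idea behind generative language models: instead of looking documents up, the system repeatedly samples a plausible next token given the tokens so far.  The toy corpus and bigram counts below are invented for illustration; real systems like ChatGPT use neural networks with billions of parameters rather than simple counts, but the generation loop follows the same principle.

```python
# A minimal, purely illustrative sketch (not any production system) of
# autoregressive generation: pick the next token from a distribution
# conditioned on the tokens generated so far.
import random
from collections import defaultdict

corpus = ("the model learns patterns from data . "
          "the model generates text from patterns . "
          "the user asks and the model answers .").split()

# Count bigram statistics: which words follow each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation; stop early
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the model"))
# e.g. "the model generates text from patterns . the user"
```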

    But it also brings, as has already been said by the previous speakers, many challenges that public authorities will have to deal with.

    One significant aspect that requires specific attention is the potential for biases in the generated content.  These models learn, as has been said, from diverse and sometimes biased data sets reflecting societal prejudices present in the training data.

    Consequently, the output of these models may perpetuate or magnify biases, such as racial biases, raising concerns about fairness and the reinforcement of harmful stereotypes.
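    As a rough illustration of how such biases can be probed, here is a minimal sketch that counts which gendered words co-occur with an occupation word in a body of text.  The five-sentence corpus and the marker word lists are invented for this example; real bias audits use curated datasets and far more careful methodology, but the underlying idea is the same: skewed co-occurrence statistics in the training data become skewed associations in the model.

```python
# A deliberately crude bias probe: count which gendered pronouns co-occur
# with an occupation word. The tiny corpus below is invented for
# illustration only; real audits use curated datasets.
from collections import Counter

corpus = [
    "the doctor said he would operate tomorrow",
    "the nurse said she would assist",
    "the engineer explained his design",
    "the teacher graded her papers",
    "the doctor said he was confident",
]

def cooccurrence(occupation: str, markers: dict[str, set[str]]) -> Counter:
    """Count marker-group words in sentences mentioning the occupation."""
    counts = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        if occupation in words:
            for group, group_words in markers.items():
                counts[group] += len(words & group_words)
    return counts

markers = {"male": {"he", "his", "him"}, "female": {"she", "her", "hers"}}
print(cooccurrence("doctor", markers))  # Counter({'male': 2, 'female': 0})
print(cooccurrence("nurse", markers))   # Counter({'female': 1, 'male': 0})
```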

    Already, the use of AI systems in law enforcement has raised so many concerns that some authorities have banned the use of AI, at least for the time being.

    Another important consideration is the misuse of generative AI for malicious purposes, such as the creation of deepfake content that is indistinguishable from real content.  The technology's ability to mimic human-like communication poses risks to the integrity of information and has implications for issues like misinformation, threats and online manipulation.

    An aspect that I believe should also be a concern is that it renders many societal tools obsolete.  For example, as a former teacher myself, I am concerned by how generative AI affects education.  Learning, at least as we understand it today, requires personal work from the learner that is then further helped by an instructor.  Generative AI can now provide an answer to the learner without that personal work, and the answer is indistinguishable from what a human would give; it is almost impossible for the instructor to know if the student gave the answer or if it was generated by generative AI.  This will have a negative impact on learning within schools and universities.

    Generative AI also renders many jobs obsolete, probably more than any technology in the past.  There is almost no industry that will not have at least a few of its jobs replaced by generative AI.  Generative AI can do the work of computer programmers, content creators, legal assistants, teachers, artists, and so forth.

    This can create major havoc in societies, as we are currently seeing in the movie industry in the U.S., where writers are on strike in large part for fear of losing their jobs to AI.

    So public authorities need to pay attention to these systems for several reasons.  First, there is the need for a regulatory framework to deal with the ethical concerns mentioned by the keynote speaker and with the potential misuse of generative AI.  Second, authorities play a crucial role in ensuring accountability in the deployment of these models.  I'm very happy that there are already discussions at UNESCO around this.

    Third, there is a growing need for public policy that addresses the impact of generative AI on various sectors, including jobs, privacy, property, and cybersecurity.

    In general, generative AI has an impact on our society and demands specific attention from public authorities to establish ethical guidelines, ensure transparency and look at the societal implications of these powerful tools.  I don't think we can stop AI's progress, but I also believe we should not let it develop without setting any boundaries.

    To your other questions on language models: there are many language models, like GPT-3, and they are indeed applied across various tasks and applications such as text completion, text-to-speech conversion, translation, et cetera.

    These language models, especially large ones like GPT-3, are trained on vast data sets of human language, drawing on a broad range of text from the Internet, books, articles and various other sources.

    Of course, these sources are not representative of the whole world, and they carry biases and so on.

    So there are some concerns.  One significant issue, as indicated earlier, is the bias present in the training data.  If the training data contains unrepresentative samples, the model can produce outputs reinforcing societal prejudices, raising many ethical questions.

    There are also concerns about the potential misuse of these models for generating deceptive or harmful content.  We have already seen how social media has harmed our society by spreading misinformation.  I come from a country that has been highly affected by this misinformation, and I'm very afraid of what can happen with AI.

    Many people have difficulty distinguishing between the true and the fake, since they trust what they see in writing.  Generative AI is taking this problem to a new high with deepfakes, where it is possible to make anyone believe anything, blurring even further the line between the true and the false.

    This will have an impact on our societies that might be catastrophic if not mitigated in advance.

    So, for the future of generative AI: despite the many dangers of generative AI, I believe that there are immense opportunities ahead.  I believe that we can expect the development of even more powerful and sophisticated generative models.  Generative AI may be fine-tuned to specific industries and domains, leading to more specialized applications such as in healthcare, finance, law and more.

    I also believe that researchers and public authorities will attempt to address the concerns, such as the ethical issues, and I'm happy to hear that UNESCO has taken this issue very seriously.

    We have already seen almost unprecedented attention from authorities such as the U.S. Congress, the European Commission and UNESCO to understand and establish a framework for the development of generative AI.  UNESCO, for example, has done a great deal of work and developed its recommendation on the ethics of artificial intelligence, adopted by almost 190 member states, as indicated by the previous speaker.

    My personal hope is that we learn from the cost of our inaction on social media, and that researchers as well as public authorities will act as fast as AI develops, so that the risks are mitigated and the opportunities outweigh the risks.

    Thank you.

    >> YVES POULLET:  Thanks a lot, Dawit.  Your presentation was very clear and very nice, and it developed what we have in mind.  It means that generative AI systems are multiplying the risks already linked with AI systems.  You have developed a certain number of these risks, and you have appealed for public regulation, or at least for a regulation.  It is quite clear that generative AI applications are bringing a lot of benefits for all of us, citizens and perhaps societies.

    But at the same time, as you have said, as you have underlined, their development is a source of harms: individual harms, definitively financial harms, and definitely also physical harms.

    I would like just to mention a Belgian case: in my country, a person, perhaps a bit depressed, decided after long discussions with a companion chatbot to commit suicide.  And I think there is a risk of manipulation we might fear from generative AI systems.  Perhaps we have to create a new right, the right to mental integrity.

    There are other risks.  There are risks to privacy and as regards intellectual property, and if we think about human rights, it is quite clear that we must also speak about the right to a job, which is definitively compromised when you see the problem of the translators, when you see the problem of certain artists.

    Definitively, it is not a question of just individual harms; it is a question of collective harms.  In the second part of our discussion, after the Q&A time, we will develop the problem of discrimination: discrimination between countries, between regions and definitely between certain communities.

    We will come back to that issue, but, Dawit, you have also mentioned, and it is very important, the problem for democracy, especially as regards the multiplication of misinformation and disinformation, and especially the possibility for all people to create deepfakes.

    How to face all these risks?  I come now to the following speakers.  To face these risks, you have already mentioned a certain number of initiatives from UNESCO, but it is quite clear that we also have to pay attention to what happens in the two leading countries in AI, I mean China and, definitively, the U.S.

    To speak about this, I will ask Changfeng Chen, from Tsinghua University, and Stefaan Verhulst, who is a professor at New York University, the Director of the Governance Lab and Editor-in-Chief of Data & Policy, to comment.

    And on this point, I have a certain number of questions.  Perhaps you remember that there was a very important open letter signed by more than 35,000 people, including very important seniors of high-tech companies like Elon Musk, asking for a moratorium.  Is that a good solution, do you think?  Is it feasible?  They asked to stop the development of generative AI for six months.

    Do you think that is a good solution?

    Another problem is definitively the question of knowing to what extent we need a regulatory answer.  On that point, Changfeng, it is interesting to know a bit more about the Chinese initiative.  China was the first to elaborate what they call administrative measures.  I would like to know a bit more about what these administrative measures mean as they relate to generative AI services.

    They have done that and, definitively, the EU has also decided to legislate: not administrative measures, but a comprehensive legislation about AI and, more precisely, with the European Parliament's recent amendments, about generative AI systems.

    I would like to know what China's position is.

    As regards the U.S., they have adopted another approach: the White House Office of Science and Technology Policy published, in October 2022, a Blueprint for an AI Bill of Rights.  This blueprint is very impressive, but it is more a sort of co-regulation, a discussion and negotiation between public authorities and big tech.

    In that blueprint, there are a certain number of recommendations addressed to the tech sector about how to build up AI systems and which ethical values we have to follow.

    So, Changfeng first and perhaps after that Stefaan, take the floor on these issues.  Changfeng, you have the floor.

    >> CHANGFENG CHEN:  Thanks, Professor Yves Poullet.  Nice to see you all, friends.

    It is my honour to attend this session.

    Before discussing the question, I would like to mention a concept about culture: cultural lag, a term coined by sociologist William Ogburn in the 1920s to describe the delayed adjustment of nonmaterial culture to changes in material culture.

    It refers to the phenomenon where changes in material culture, such as technology and tools, come more rapidly than changes in nonmaterial culture, such as beliefs, values and norms, including regulation.

    I think cultural lag describes the situation now that generative AI has appeared.  We are excited; meanwhile, we are panicked.  The capabilities of these new technologies break through the scope of traditional legal regulation.

    So first, I would say we need regulation for generative AI.  It is a powerful technology with the potential to be used for good or for harm.  But generative AI is still developing, meaning that even the scientists and engineers who created it cannot fully explain or predict its future.  Therefore, we need to regulate it prudently rather than kill it in the cradle through regulation.

    That is the reason why I will first introduce some policies and regulations before offering some judgments.  So let us keep this in mind first.

    And at the beginning of a new thing, we need to be more inclusive and have the wisdom to calmly deal with its mistakes, with the confidence that human civilization gives us.

    So, on the question of a moratorium: a moratorium on generative AI would be a temporary ban on the development and use of this technology.  This would be a drastic measure, and it is unlikely to be effective in the long term.

    Generative AI is a powerful technology with the potential to be used for good.  And it would be unwise to stifle its development entirely.

    And then, I think a globally regulated model for generative AI would be ideal, but it will take time to develop and implement.  So, speaking about China: artificial intelligence, including generative AI, is developing very rapidly in China, and it has been widely used.  Generative AI applications from ByteDance, Baidu and other companies are installed on my iPhone, mobile phone and laptop, and I use GPT and Chinese chatbots at the same time.

    When I choose something in my life, you know, like when I choose restaurants in Beijing or Shanghai for a party with my friends, these applications always help me.

    In the field of education, artificial intelligence applications developed by iFlytek are already helping teachers update their curricula, correct students' homework and provide personalized teaching guidance.

    China has been at the forefront of developing and regulating generative AI.  In 2023, China released the Interim Administrative Measures for Generative Artificial Intelligence Services.  These measures require providers of generative AI services to source data and foundation models from legitimate sources, respect the intellectual property rights of others, and process personal information with appropriate consent or another legal basis.

    They must also establish and implement risk management systems and internal control procedures, and take measures to prevent the misuse of generative AI services, such as the creation of harmful content.

    The interim measures to regulate generative AI services are just a start.  China's first artificial intelligence measures are more realistic than the previously released draft for comments.

    On the day the measures were published, in the afternoon of July 13, the share prices of ChatGPT-concept stocks in the Hong Kong market rose.

    Perhaps, yeah.  Some legal experts believe that the current regulatory framework in China cannot effectively address regulatory challenges.  Its main content focuses on regulating providers of AI products and services, and it still belongs to the traditional responsibility model for AI governance.

    Generative AI involves diverse actors in the industrial chain, such as data owners, computing power suppliers and model designers.  It is difficult for regulators to assign the heavy responsibility only to providers of generative AI services.

    Those are conclusions from some legal experts who published articles in China, in Chinese.

    The framework is also unable to deal with some social issues.

    And I say that's just the start.  Thank you.

    >> YVES POULLET:  Thanks, Changfeng.  Those were very interesting points you were underlining.  I will take from your intervention a certain number of key words.  You said the famous cultural lag is important.  You called for what you call a prudent regulation, not going too fast, and definitively you asked for what you call an inclusive process, in order to have the participation of all stakeholders.  As regards the content of the administrative measures China has taken, they are quite close to what the EU regulation is proposing.

    I have seen that you pay attention to the intellectual property questions and to the privacy questions.  I was not surprised, because they are very important in your regulation.

    Definitively, to address the risks you propose internal risk assessments: risk assessments which must identify the risks, not only the individual risks but the societal risks, and which propose a certain number of mitigations of these risks.

    So I am quite comfortable with this approach, because it is quite close to the EU regulation.  And now I turn to Stefaan.  I give the floor to Stefaan because the U.S. has taken another option.  It is perhaps quite interesting to see to what extent, even if the U.S. has taken a co-regulation approach, the same ethical principles may be developed and the same procedures might be implemented.

    Stefaan, you have the floor.

    >> STEFAAN VERHULST:  Thanks so much.  I hope you can hear me.  Thanks, Yves, for having me.  I wish I were there in person in this beautiful room you have there, which looks like a really adequate place for having a conversation like this.

    So, to cover the questions you posed: the first question, it seems to me, was really about the moratorium.  I think the discussion, from my point of view, did open up a broader debate about whether, A, a moratorium is even feasible, or whether we should focus on responsible technology development as opposed to banning, or even having government intervene in how innovation is facilitated.

    I think it was an interesting conversation, but at the same time, in addition to this tension between a moratorium and a responsible development approach, the underlying tension was also to what extent the development of AI, and in this particular case the development of large language models and generative AI, should be open or closed.  That was the other big discussion, which from my point of view was actually more interesting, because it really identified the interests behind the moratorium.

    And also the interests that are currently at play, because on the one hand you have organisations like, surprisingly, OpenAI advocating for closed development, quite often with the argument that if you were to open up the development of large language models or generative AI, you would have the potential for abuse.  But then on the other hand you have Meta, for instance, which has been advocating for an open approach to the development of generative AI, which from my point of view is actually most in sync with how AI has been developed until recently.

    Most of the research related to artificial intelligence was always open and, as a result, I would argue, has been able to make massive advances, because it was open and because you had a whole army of developers and researchers working on improving existing models, including GPT models.

    If we start closing it, then on the one hand we will actually create new power asymmetries between those who have the closed models and those who have the open models.  From my point of view, it would actually undermine a core principle of research in the AI space, which has always been open.  And by making it open, you would also be in a far better position to actually identify the weaknesses and the challenges that might be out there.

    I think that is another layer that needs to be addressed: it is not just about regulating or not regulating, it is really about to what extent you should make the tool open so you can really examine what the vulnerabilities are.  Of course, the argument here is that if you make it open, others will abuse it.  That does not, from my point of view, validate a closed approach, because a closed approach will actually solidify the current power asymmetries in the market, which are equally challenging and important to address, beyond the potential abuse of the technology itself.

    That relates to the first question, Yves: we need a more sophisticated way to have a conversation about a moratorium.  It is about how we develop technology in a responsible way.  I don't think a ban will automatically make it responsible; it actually will solidify power positions.

    The second element is really to what extent we can sustain the kind of culture of openness, as it relates to artificial intelligence research, that has made tremendous strides until today.

    Of course, you asked what the approach from the U.S. is as it relates to AI, and specifically as it relates to generative AI.  As always, it is more complicated than just one approach.  I think there are multiple approaches currently being tested out.  I would just touch on six approaches that we can see within the U.S. context.  And indeed, as you rightly said, many of the approaches might be, or feel, somewhat different, but many of the principles underpinning those approaches are very much in sync with, for instance, the UNESCO recommendations, and also very much in sync with other emerging principles, such as the ones that have been advocated within Europe as a result of the AI Act.

    Before I delve deeper in, it perhaps suffices, and it is important, to state that the U.S. is again a member of UNESCO, and that this also provides a new opportunity to bring the U.S. within the conversations as they relate to the implementation of the UNESCO recommendations, from which, as you know, the U.S. was absent until recently.  I think having the U.S. again as a member provides an opportunity to create more approaches that are in sync at the international level as well.

    Now, the six approaches.  One approach that was already mentioned by Yves is more of a rights-based approach, right?  Indeed, OSTP tried to convene a multi-stakeholder approach in order to develop this Bill of Rights, which was really an effort to set out a set of principles, a set of rights, that need to be enshrined in a voluntary way.  Yves, as you rightly said, this is not about hard regulation.  This is more a codesign of certain frameworks that will subsequently need to be implemented in some kind of self-regulatory, voluntary way.

    But the Bill of Rights was interesting because it did specify a set of principles and a set of areas of concern, such as, for instance, the need to really focus on the safety and effectiveness of the systems that are being provided, focusing on algorithmic discrimination, and focusing on privacy.  Of course, as you know, the U.S. does not have national privacy legislation, but I think the Bill of Rights was important in emphasizing the need for perhaps a more national, cross-sectoral approach as it relates to privacy, in order to deal with AI as well.

    There are also issues of notice and explainability, which again are not unique to the U.S. but are coming up everywhere.

    Then, of course, there is also the need to think about human alternatives, as opposed to automated alternatives, in actually making decisions.

    These were the areas that the Bill of Rights addressed, and it subsequently provided the framework for additional commitments.

    I think that is the second big element of what has happened within the U.S.: the White House, through, for instance, the Bill of Rights but also through other means, has been able to engage all the large tech companies in making commitments for the responsible development of AI.  This includes commitments to test their systems, to see to what extent they are aligned, with an assessment tool that, interestingly, was developed in a collective manner during DEF CON 31, which itself was an interesting exercise.

    There they tried to tap into the collective intelligence of expertise in order to come up with a framework that subsequently was recommended by the White House to be the framework for assessment.  Yves, you want to intervene?

    >> YVES POULLET:  Just a moment: you will need to conclude in one or two minutes.

    >> STEFAAN VERHULST:  Sure.  I know, I know.  I can see the clock on the wall here.

    The other element, and I will briefly emphasize some aspects, is, of course, that we have also seen the creation of methodologies to assess risk, similar to what has happened in Europe.  NIST, the National Institute of Standards and Technology, developed its AI Risk Management Framework, where it really tries to define what trustworthiness is and how we know whether systems are trustworthy.  I think it is definitely a worthwhile exercise to look into it.

    Then the other element, which is always important, is not only regulation but quite often the shadow of regulation, given the fact that we are relying on self-regulation.

    What has happened is that Senator Schumer, who leads the Senate majority, has held a set of hearings.  As you know, hearings are actually a very valuable tool in regulation, because they provide for oversight and they provide for a discussion.

    The last thing I will say, Yves, and then I will shut up, is that while all this has happened, and while a lot of this is actually co-regulation, in most cases self-regulation, what we have seen happening is that the states in the U.S. have actually become far more active than the federal agencies in regulating.  This refers again, Yves, to my other area of interest in AI governance, which is of course AI localism.  What we have seen is that states and cities have been really active in AI governance in the U.S.  There are about 200 bills being proposed at the state level at the moment, and multiple cities have started legislating on AI as well.  I think it is also worth noting at the international level that states and cities are actually at the forefront of coming up with frameworks and legislation.

    I am going to stop here, Yves.

    >> YVES POULLET:  Definitely, thanks, Stefaan.

    I think your proposal to complexify the discussion, notably with the question of open versus closed AI, is definitely very interesting.  We will have to come back to that discussion if we have time.

    Another point, I think, is that you said the U.S. asserts the same ethical values as those asserted by China, and I think we have a sort of common agreement about the fact that the ethical values are fixed by UNESCO in a very clear way and that we might accept that.

    So I don't think there is really a problem as regards ethical values.  The problem is more how to enforce these ethical values.

    And you have proposed to pay attention not only to public or self-regulation; you also mentioned a certain number of things like standardization and, definitively, quality assessment, and I think that is very, very interesting.

    You finished with this marvelous point about localism in regulation.  And I think that is very powerful.  I think we also need local communities taking this very seriously and proposing solutions which are totally in accordance with their culture and with the habits of their people.

    Okay.  So now we have a question and answer discussion.  I know that Fabio, thanks for being here, already has certain questions.  Please.

    >> FABIO SENNE:  Thank you.  So we have two questions and comments online, one from a 17-year-old boy from Bangladesh, who sent some very nice contributions.  I won't read them all because they are in the chat, but just to mention the comment from Omar: convene a global forum on generative AI to discuss the social implications of these technologies; support research on generative AI and everyone it affects, including children and young people; and promote digital literacy and critical thinking skills among children and young people so they can be informed users of generative AI.

    And also Steven Vosloo, building on Omar's point.  Steven, from UNICEF, says they are also concerned that we don't yet know the impacts of generative AI, positive and negative, on children's social, emotional and cognitive development.  Research is critical but takes time.  What is the best way to navigate the reality that the tools are already out in public?  We need to protect and empower children today.

    But we will only fully know the impacts later.  How to deal with the need for research when, at the same time, the tools are already out there?

    >> YVES POULLET:  Thanks for this first question.  Perhaps I will ask the different speakers, not only the speakers who have already taken the floor but also Siva and perhaps you, Fabio, whether they want to answer these questions.  And definitively I have a look at the audience.  I see the mic right there.  So if you have other questions, perhaps it will be interesting to raise them.

    Now?  Are there questions?  No?  Is there nobody?

    Okay.  So I come back to these two questions, and it is quite interesting to see that they are questions raised by young people.  They are quite interesting, and there is a specific need for being educated in the use of these generative AI systems.  I had in mind that Stefaan has spoken about the fact that you must have responsible people using generative AI systems.  When you think about responsible people, it is not only the tech companies which are developing these AI systems but also the users.

    So perhaps it might be quite interesting to answer the questions along that line.

    But are there answers?  Changfeng, Fabio, Stefaan?

    >> STEFAAN VERHULST:  Happy to briefly reflect on that.  I fully agree with Omar: we do need to engage with young people in a more sophisticated way to figure out, A, what their preferences are and, B, what their solutions are.  It is not just about listening to young people; they might have solutions that are far more informed because they are digital natives, in many countries as well.

    We actually just finished, last week, six youth solutions labs in six regions, together with UNICEF and with the Lancet commission focusing on adolescent wellbeing.  One of the questions that we posed to them was actually about data and artificial intelligence.  The answers were sophisticated and showed that young people have a sense of what is happening and what their preferences are as they relate to AI as well.  So we need a lot more of those conversations, especially in low and middle income countries, where the majority are actually young people.  We need to engage the majority in order to really become more legitimate in how we go about AI as well.

    I fully embrace that, and I think young people are more innovative, with the youth --

    (Internet connection poor.)

    >> STEFAAN VERHULST:  ... how we have done conversations for the last few years.  They have moved on, and they are having conversations on different platforms where we, and I talk about myself, kind of the aging population, are not used to having those conversations.  We need to really innovate in that way as well.

    >> YVES POULLET:  Thanks, Stefaan.  I think Changfeng has something to say.

    >> CHANGFENG CHEN:  Yes.  I think generative AI is conducive to educating young people.  And it raises a question of rights.  In fact, there is a theory of rights for children in media literacy: young people have the right to use new technologies, to learn and to develop themselves.

    And adults and professionals should have the obligation to guide young people.  Yeah, it is a long process for young people to obtain these rights.

    But I think the efforts have started.  UNESCO has its Media and Information Literacy Week at the end of this month, in the last week of this month, in Jordan.

    Many people are worried about the young people who are in this kind of situation.  And I think we should give young people this right.

    Also, the technology companies should create some special help for young people.

    >> YVES POULLET:  Thanks, Changfeng.  I am quite interested in this new right for children to use technology for their own development.  It is a very interesting point.

    Yes?  Okay.  I think we have a question from the remote audience.  Doaa?  You have a question?  Please, two minutes, no more, because we have other things to develop.

    >> AUDIENCE:  Thank you.  I hope you can hear me.  I'm Doaa; I'm a programme specialist working with Gabriela.  I actually wanted to react to the previous questions, if that's okay, very quickly and briefly.  I think the questions are very important and pressing, because it is true, as very rightly pointed out, that even if we were to think about a new ethical framework or a new regulation for generative AI in particular, it would take a lot of time.  And it would indeed be wiser to utilize the tools that we currently have, like the recommendation and other existing guidelines that only need to be used.

    But until we have more concrete things, what can be done in practice?  I think it is important to also go back to the essentials of awareness raising.

    For most people that I know, and especially, I think, young people, it is very tempting to use those models, right?  Because it shortens our time and our efforts.  But not too many are actually aware of the risks that were rightly pointed out by all the panelists.

    Usually, only if people try to use generative models to ask questions to which they already kind of know the answer in advance do they see the pitfalls.  They see the challenges.

    The inaccuracy, the references to sources that are made up, and things like that.

    So I think being aware, raising awareness --

    >> YVES POULLET:  Doaa, I think we have understood what you mean.  Thanks a lot for your intervention, but I must cut you short, I'm sorry.

    >> AUDIENCE:  No worries.

    >> YVES POULLET:  Thanks a lot.  Is there a question in the room?  Yeah, two questions.

    >> AUDIENCE:  Thank you very much.  As a child rights researcher from Germany, I appreciate that we have questions about the rights and interests of young persons in this room.  But for me it is not just a question of the responsible usage of AI by young people; it is a question of the responsible usage by us all.  And much more important for me is that it is also a question of responsible coding and designing.  I'm wondering if this can be evaluated in a process of self-regulation, or if it is not necessary to have a kind of official institution to give permission before such an AI technology comes into force or is distributed to us all.

    So maybe I am not familiar with the proposed laws, but maybe we can say something about that.  Is it the right way for these responsible technologies to be self-regulated by the private sector?  Or should we have an official institution give a kind of certificate or permission to roll them out?

    Thanks.

    >> YVES POULLET:  Thanks a lot for your question.  It is quite clear that we already have certain labeling institutions.  Your question might refer to the use of standardisation processes and solutions for responsible AI, which the systems must follow.  The problem is that there are not yet many standards for generative AI systems.  The companies must work on that issue very actively.

    Okay, there is another question, I think.

    >> AUDIENCE:  Thank you.  Tapan from Finland.  It seems to me that we are talking about the past.  AI systems are no longer the purview of big tech companies only.  When you can run a large language model on your own laptop, the cat, or the LLaMA, is already out of the bag in that respect.  Basically, everybody, not just the AI actors in the sense of the UNESCO document, effectively will be a developer as well.

    And I predict this will happen in about two years: it will be easy to develop your own AI models without serious technical expertise.  Everybody can be doing that.  And you cannot regulate everybody.

    Okay, it would be nice if all developers were responsible, as it were.  But if everybody is a developer, I don't see how you can make everybody responsible.  Maybe someone can; if so, I'm happy about that, but I don't see how that works.

    So think about the implications of all people, criminals, bad actors, everybody, developing AI models for themselves to do whatever they want them to do, not just using the existing things developed by someone we can regulate.

    So what can be regulated is the question.  You can regulate commercial usage, official usage, perhaps the data that can be used.  But the development, no, I don't think you can.

    Thank you.

    >> YVES POULLET:  Thanks a lot for your statement.  I am afraid we have to go to the second part of our session and give the floor to Fabio and Siva.  Siva is present remotely.  And I have two questions.  Recent reports on large language models clearly show how poorly these tools perform in many languages other than the predominant languages in the systems, like English or Chinese.

    Notwithstanding the efforts of certain states to establish big data in their own languages: I mean, for instance, that Finland has taken a certain number of measures to develop data repositories in the Finnish language.

    More important is the fact that generative AI systems are promoting cultural interference.  How do you see a solution to that discrimination, denounced by the UNESCO recommendation?

    The second question concerns the fact that the use of most generative AI applications, contrary to traditional Internet services, is based on a business model which requires payment for the proposed service.

    Once again, there is a risk of seeing a certain number of persons excluded from the benefits of this innovation, contrary to an inclusive scenario.

    How do you see that risk?  And with which solutions do you intend to solve it?

    Siva, you have the floor.

    >> SIVA PRASAD RAMBHATIA:  Thank you.  Thank you.  I have benefited from listening to the previous panelists' recommendations and questions.  Basically, UNESCO --

    (Captioner is having difficulty with the audio.)

    >> SIVA PRASAD RAMBHATIA:  ... the kind of issues we are discussing.  There are also general kinds of solutions.  We all know how technologies actually --

    >> YVES POULLET:  Is it possible to increase the volume?

    >> SIVA PRASAD RAMBHATIA:  Yes.  Am I audible now?

    >> YVES POULLET:  Is it okay for you?  Please?

    >> SIVA PRASAD RAMBHATIA:  Is that it?

    >> YVES POULLET:  I think so.

    >> SIVA PRASAD RAMBHATIA:  Is it better?

    >> YVES POULLET:  It is okay for me.

    >> SIVA PRASAD RAMBHATIA:  Okay.  What is important is that, generally, any technology discriminates between those who have and those who have not, in terms of education and in terms of resources and, I think, in terms of control and noncontrol.  This is one thing that we must remember.  That is where discrimination begins, and the big discrimination lies in that source itself.

    And that is in fact what artificial intelligence has done: it has created new kinds of inequalities, new kinds of divides, what we call the digital divide.  Digital divides actually coexist with, or accelerate, existing social and cultural inequalities.

    That is where, when we are talking about the technologies: technologies by themselves are creations of companies or individuals or anybody else, but they carry their makers' marks, their makers' kinds of ideas.  And they are not very concerned about inclusiveness as a problem, because it is profit that is more important for them.

    This fact has been established very widely by scholars.  And in fact, what we find is that artificial intelligence has affected societies in multiple ways.  It has also affected societal relations.  In fact, it has affected social and cultural ecosystems, whether through fake news or other kinds of things, or breaches of privacy, and a number of other things that we have discussed already.

    And given this, we must also remember that these generative models also pose a challenge of ethical issues, and we need to focus on the issues of artificial intelligence specifically as they reflect on marginal communities or indigenous communities and those who are poor and illiterate, especially those from the Global South.

    In fact, this is where most of these generative models are too general.  Stefaan was talking about the Bill of Rights; some of these frameworks are large in terms of their application, but they assume more homogeneous societies.  When we are talking about plural societies, multilingual societies, the problems are unbounded.  Even within that, gender and other issues are more problematic.

    That means that when we are talking about any kind of guidelines, any kind of restrictions, any kind of controls, one has to be sensitive to all these layers of hierarchies.  And in fact, what we find is that the generative models have impacted the livelihoods of many sections of society.  We don't need the writer; something can replace them.  We also have other kinds of issues, but let me not waste much of the time, because of the paucity of time.

    I want to say that what AI and the generative models are doing here is creating a kind of discrimination between humans and societies, and also between humans and nature.

    What we need to do, basically, is focus more on local or region-specific approaches.  We must also try to develop databases from local knowledge and traditional epistemologies, which are more usable for building better societies and for finding solutions to the human problems that we have.

    This can be a good contribution to plurality, and also to nature, in order to build sustainable and equal societies.  That is what I would like to briefly touch upon.  I can elaborate in answer to questions.

    Because the time is very short.  Thank you.

    >> YVES POULLET:  Thanks, Siva.  I like your expression: a Bill of Rights for every citizen, for everybody in the world, in a plural society.  You come back to the idea developed by Stefaan about localism.  I think it is very, very important to hear that from you.

    Fabio, you have the floor.

    >> FABIO SENNE:  Thank you, Yves.

    >> YVES POULLET:  Just a question, is it possible to have 15 minutes more?  I am turning to the technicians.  Is it possible to have 15 minutes more?  No?  No.

    >> FABIO SENNE:  We will do the rest ...

    >> YVES POULLET:  Is that okay?  Ten minutes is okay.

    >> FABIO SENNE:  Thank you.  I will try to be very brief.  It is very easy to speak after the great contributions that we have heard.

    I would just highlight a few points from my perspective.  And I work in Brazil in UNESCO centre also connected to the Brazilian -- represented by and producing research and data in the field.  We need to say that we don't know yet and we don't have enough data on this issue.  So there is a need for more investigations in this area.  But we do know some things that I think it is important to understanding the possible risk and the possible influence of the scenario.

    First, of course, the global digital inequalities such as the inequalities among countries and regions and how they access the Internet and digital technologies, how this can impact the quality of the training data that these models have.  Such as issues like languages.  So how such part of the lags are not represented or well represented in this models but also the inequalities within countries that also affects much the diversity of the data used.  So in the case of Brazil we know that there are persistent patterns of digital inequalities connected to race, gender, rural versus urban, income level, age, education level, so on.

    From the perspective of the diversity and inclusiveness of the process, I think digital inequality is something very important, but also from the perspective of the use of these types of generative AI tools.  Use can also be affected by, or correlated with, other aspects such as poverty and other vulnerabilities.

    So we know from other digital technologies that early adopters tend to benefit more when a new application becomes available, and the impacts tend to be more disruptive in the early phases of dissemination of the tool, when only a few can benefit.  From the standpoint of fairness and nondiscrimination, this is also important.

    Finally, when we talk about digital inequalities, we are not talking just about access and use, but also about skills: the differences in the abilities that users have.  From the data we have in Brazil, when we research children's use of the Internet and their skills, we know that although operational skills are widespread among this population, skills related to critical understanding of content, for instance, are underdeveloped among the population we interviewed in the case of Brazil.  For instance, 43 percent of students 11 to 17 years old in the country agreed that the first result from an online search is the best result; 51 percent agreed that every person finds the same content when searching online; and 40 percent are unfamiliar with online checking of information.  Here we are talking about children, so the need for raising awareness, literacy, and AI literacy throughout the educational systems is also an issue.

    So, just to finish, I would like to call attention, of course, to the need for data production and research to better understand this process.  But from the data we have, we already know that we need to face digital inequality if we want an AI that is more inclusive and human centred.

    So this is my perspective for now.  And thank you.

    >> YVES POULLET:  Thanks to Fabio for these thoughts, definitely very interesting remarks.  I think you have given very concrete indicators about what is happening and the inequalities we are facing with these new technologies.

    We might now go to the question and answer time.  I don't know if there are questions.  After that, we will go among the Panelists in order to hear from each of them a one-minute recommendation to address to the IGF about generative AI systems.

    So please, do we have any questions?  Online?  There are no questions?  Perhaps Ms. Ramboza, no?  So I turn my head.  No?

    Okay, so we might go directly to the recommendations.  Perhaps I will start with Siva.  You finished with a very strong recommendation.  Perhaps you might repeat it, so that we can write down exactly what you have in mind.

    Siva, you have the floor.  One minute.

    >> SIVA PRASAD RAMBHATIA:  Yes.  My recommendation would be that when we are designing AI and generative models, we should concentrate more on local and regional kinds of issues.  For that, we can think in terms of multiple aspects and also inclusiveness.  Otherwise we will be excluding those sections which are in the minority.  Thank you.

    >> YVES POULLET:  Thank you for your recommendation.  But we are not presently in that sort of situation, because it is quite clear that if you want to create these big models, you need a lot of data.  You definitely need a very complex algorithmic system.  You know that most of the large language models are using more than 1 billion parameters.  So how to develop all that?

    >> SIVA PRASAD RAMBHATIA:  Can I add to that?  That is where I was suggesting that the local knowledge systems need to be documented, so that they can help in building these kinds of models.  Thank you.

    >> YVES POULLET:  Thanks a lot, Siva.

    Changfeng, do you have a recommendation?

    >> CHANGFENG CHEN:  Yes, the discussions were interesting, and you spurred me to think about professionalism.  I think professionalism in artificial intelligence should be promoted.  Professionalism is a set of standards and behaviors that individuals and organisations are expected to adhere to in the workplace.  It involves demonstrating qualities and characteristics that contribute to a positive and effective work environment.  Just as it applies to law and to journalism, key aspects of professionalism include reliability, high standards, ethical behavior, respect, responsibility, teamwork and so on.

    So for artificial intelligence, humans need to have a real professional consensus on the new technology, not only what is written into regulations.

    Of course, we still need to respect multicultural values, but at the same time, in the general technical field we need to have shared thinking.  So I think AI professionalism, like the professionalism of journalism, can have the effect of regulation.

    >> YVES POULLET:  Thanks, Changfeng.  Mr. Bekele, do you have some ideas with regard to recommendations?

    >> DAWIT BEKELE:  Thank you.  I agree with most of the things that have been said, in particular on the importance of having real responses to the question.  I believe that generative AI shouldn't be imposed on any society.  Societies have to choose how they use it.

    But I see some challenges, particularly resources.  Some countries don't have the resources to deal with these kinds of problems.  And also, you know, the knowledge.

    So I think it is important for organisations such as UNESCO to make sure that everyone is empowered, everyone understands the issues, and everyone has the possibility, you know, to handle the issues at the local level.

    Also, I think the big companies have the responsibility to support, even financially, the poorer countries, so that they can decide what they take from this important evolution.  Thank you.

    >> YVES POULLET:  Thanks, Dawit.  And Stefaan, perhaps, now?

    >> STEFAAN VERHULST:  Yes, sure.  Very briefly, I think we need to pay more attention to the fundamental principle of garbage in, garbage out in relation to generative AI.  That means we have to focus not just on the model but also on how we actually create quality data; we should be focused on the data side, on unlocking quality data.

    That means that the whole agenda of open data, open science, and quality statistics has become more important than ever, because if we want to have quality generative AI, we need to have that infrastructure.

    >> YVES POULLET:  Thanks.  Fabio?  You have the last one.

    >> FABIO SENNE:  Thank you.  Just to highlight also the need for monitoring and evaluation, I think we have to foster the international frameworks.  There are the ROAM-X indicators from UNESCO; there is the OECD and its AI observatory.  Those can be important, at the national and international levels, for fostering research, monitoring, and understanding the impacts of those tools that are already emerging.

    >> YVES POULLET:  Thanks, Fabio.  I think that was a marvelous transition to you, Marielza.  Thanks a lot for joining us.  I know that it is very, very early in the morning, and definitely thanks a lot for being with us.  Marielza, you are the Director of the IFAP programme.  Perhaps a few words?  You have heard the expectations of a certain number of persons from UNESCO.  So perhaps you have the floor.

    >> MARIELZA OLIVEIRA:  Thank you very much.  And hello, everyone.  I'm really pleased that I can join you, even if it is only for part of this important IGF session; unfortunately, I had another commitment.  From my position, let me first warmly congratulate Yves and the IFAP working group on AI ethics, which is the convener of this fascinating discussion on generative AI.  This is a new technology which holds profound impacts for our society, and we need to look at it through the lens of ethics and human rights.  IFAP is a programme that supports Member States in fostering inclusion; we foster universal access to information and knowledge for sustainable development.

    Information ethics is, of course, among our top priorities, and IFAP has recently endorsed a new strategic plan for the period 2023 to 2029 that emphasizes the implications of these new technologies, including AI, for our right of access to information.

    One of our areas of work is to build capacities for, and convene reflections on, the ethical, legal, and human rights issues that arise out of frontier technologies.  This marvelous session is an example of the excellent contributions being made by the IFAP working group dedicated to this topic.

    The implications of frontier digital technologies, which go from artificial intelligence, including generative AI, to blockchain, artificial reality and other new technologies, are profound for our information ecosystems, and we need to grapple with these implications.

    So what IFAP does is support and encourage a series of actions.  For example, we work on promoting research into the implications of these technologies for inclusive, equitable knowledge societies.

    We raise awareness of the sustainable development opportunities that these technologies bring, but also, you know, of the risks and of the mechanisms to address those risks, including the impacts, for example, on privacy, on the environment, and so on and so forth.

    Following the endorsement by UNESCO's General Conference of the Recommendation on the Ethics of Artificial Intelligence, which is the first global instrument on artificial intelligence, IFAP promotes the implementation of the Recommendation and supports regional and international cooperation, research, the exchange of good practices, and the development of understanding and capabilities to respond to these ethical impacts on information ecosystems.

    IFAP also promotes evidence-based frameworks and approaches towards designing and governing artificial intelligence, and we certainly use the Internet Universality principles that Fabio just mentioned, which say that digital systems must be human rights-based, open, accessible, and multi-stakeholder governed.

    IFAP also serves as a platform for Member States, academia, civil society, and the private sector to share experiences and best practices to overcome digital divides and inequalities, including the different capacities to work with technologies such as generative AI.  We work with institutions to ensure that AI technologies are accessible and beneficial to everyone, including marginalized communities and groups such as women, the elderly, persons with disabilities and so on.

    We participate in dialogues across the globe to trigger discussions among all stakeholders, to share the challenges, best practices, and lessons learned on these technologies.  And this is why I'm calling upon all stakeholders here today to amplify the call for human-centric approaches to AI.  It is a common, collective effort that we need in order to uphold shared values and build sustainability and equality across all knowledge societies.

    For that, I want to congratulate again the working group on information ethics, particularly Yves, which has been taking this critical conversation forward through a series of major global and regional workshops on this topic.  I hope you can all join the next events and disseminate the outcomes of this discussion.  Thank you very much for your insights and commitment to shaping a more informed and evidence-based digital future that leaves no one behind.  Back to you, Yves.


    >> YVES POULLET:  Thank you for the concluding remarks.  It is a pity we have to finish this workshop so early.  I think we would need more than one day to discuss all the topics we have mentioned today.

    But it will definitely take a common, collective effort to address all these issues and to find solutions to them.

    So I would like first to thank the technicians for their nice support.  I think it is very important.  Thanks a lot.

    And I thank them for their understanding as regards the fact that we took ten minutes more.  I would like to thank the audience, the remote audience, and definitely the persons who had the courage to stay here.  And I would definitely like to thank very, very strongly the Panelists for their nice input to the discussion.

    I see, Marielza, that you raised your hand?

    >> MARIELZA OLIVEIRA:  No, that was applause.

    >> YVES POULLET:  Okay.  So I think we definitely need applause.

    (Applause.)