
IGF 2019 – Day 3 – Convention Hall I-C – OF #33 Developing policy guidelines for AI and child rights - RAW

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> JASMINA BYRNE:  Okay.  Hello, everyone.  And welcome to our session about youth and artificial intelligence.  I'm head of policy in the UNICEF policy office, based in New York.  Thank you all for coming today.  We are unfortunately holding our session at the same time as my other colleagues from UNICEF are having the global alliance for kids launch, but we have a sizeable crowd here of people who I hope are interested in what we're going to talk about today.  I'm not going to be long with the introduction.  I just wanted to say that during the past few days at the IGF, "30 years" has been mentioned a couple of times, in the context of 30 years since the fall of the Berlin Wall and 30 years since the birth of the World Wide Web, but we are also celebrating another anniversary this year, and that's 30 years of the Convention on the Rights of the Child.  When we talk about children and their rights in the digital age, the next 30 years are actually going to bring significant differences in children's lives, in this case particularly through the development of AI technologies and AI systems.  So today's session is co-organized by the Berkman Klein Center at Harvard University and our office at UNICEF. 
       The first speaker today is Sandra Cortesi, who is director of the Youth and Media program.  Then we'll hear from my colleague Steve, and a few other colleagues will give brief commentary.  I hope we have time to discuss this very important topic.  We only have an hour, so let's start. 

>> SANDRA CORTESI:  Thank you so much, Jasmina.  I'm glad to be here with lots of friends and colleagues.  I'm going to use my minutes to basically share with you an overview of what we've been up to at Youth and Media at Berkman Klein for the last two and a half years.  You can only see me and not my slides.  Okay. 
       About two and a half years ago at Berkman Klein, we started a larger center-wide project, the Ethics and Governance of AI.  Our approach was very much to basically have a reflection and conversation about how AI can be integrated into each existing project rather than create a whole new project.  I must admit I was not an AI expert whatsoever at that point. 
       So I had to educate myself basically from scratch, and the first thing I did was start with a basic Google search about AI.  And this is what comes out when you do a visual search on AI.  I'm a very visual person, as you can probably tell from my slides.  That I didn't find helpful whatsoever as a starting point. 
       But I said it's interesting in itself what AI is to the larger world.  It's something blue with robots, mostly male-looking robots.  But then, obviously, I did a larger search and went through all the big reports talking about AI, and noticed even at that very early stage that most of them did not include youth issues; youth were rarely mentioned.  I think Steve is going to go into more detail on that. 
       It has become better.  At the top you see a few examples of newer reports that talk about young people within the bigger AI framework, but nevertheless, the information was scarce. 
       And so what we then did at Youth and Media is we spent about a year trying to collect all the evidence we could find around AI and young people, and we covered it in this report, which I hope you will all read if you haven't already.  We covered five areas that I will give you some examples from.  That doesn't mean that those are the only five areas that interest us, or the only ones that we ever talk about, but at the point when we started the project, those were the primary areas. 
       Just to give you the five areas, because it's way too small up there: the first one was education.  The second one was around health and well-being.  The third one was about the future of work.  The fourth one was about privacy and safety.  And the fifth one was about creativity and entertainment. 
       Just two days ago, and you might have seen this one already, my colleague, the Berkman Klein executive director, who is with us today, also wrote this piece in Wired magazine based on some of the findings and observations covered in the report, basically making a case for how we should include young people more in this debate, from beginning to implementation. 
       So the first area that we looked at in the report is privacy and safety.  Just to give you an example, which other colleagues will later talk more about: we looked, for instance, at the educational context and issues around privacy there.  One example came up from Asia, where we saw that several schools were incorporating sensors, devices like this, that should capture young people's attention while in school and, through an analysis in the back end, decide if young people are awake and paying attention, and otherwise help them increase their learning while at school, also giving teachers real-time updates and giving parents updates in ten-minute intervals. 
       So examples like this in the report highlight questions such as: what is young people's autonomy around these new technologies, and will they have a certain say when it comes to these technologies, especially around privacy but other areas as well?  Will they have a say about the data that is being collected?  Will they have a say in what's happening to that data?  Again, this is just a quick example.  Many more questions are in the report in this section. 
       Another area that is covered in the report is health and well-being.  To give you an example there of what came up as we were doing the research: we were looking at natural language processing and how it can be used, for instance, to detect if young people are not well or struggling or in distress.  I think that might be something that Karuna will also mention.  I see a ton of potential in this area.  One of the questions that came up is, for instance, will AI systems like this potentially be able to reduce stigma around certain health issues, because a broader range of young people will have access to these technologies and the adult society may also come in contact with them?  But on the flip side, the question is, will these AI-based systems be as culturally sensitive, let's call it that, to how young people express, for instance, emotional distress? 
       Some of my work is based in the United States, where young people may be more open to saying if they don't feel good, while I also work in Latin America, where it is much more difficult to speak up if you have emotional struggles.  So will those systems actually be aware of and sensitive to that? 
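       To make the kind of text analysis being discussed here a little more concrete, the sketch below shows a minimal bag-of-words Naive Bayes classifier that labels a phrase as "distress" or "ok".  The labels and training phrases are invented for illustration only; real systems are trained on large labeled datasets, and whether those datasets capture how young people in different cultures actually express distress is exactly the open question raised above. 

```python
import math
from collections import Counter

# Invented toy training data; a real system would need large, carefully
# labeled, and culturally diverse datasets, not a handful of phrases.
TRAIN = {
    "distress": ["i feel hopeless and alone",
                 "everything is too much for me",
                 "i cannot sleep and i worry all night"],
    "ok":       ["had so much fun with friends today",
                 "i am excited about the weekend",
                 "this new song is great"],
}

def train(docs):
    """Count word frequencies per label (a bag-of-words model)."""
    model = {label: Counter(w for t in texts for w in t.lower().split())
             for label, texts in docs.items()}
    vocab = {w for counts in model.values() for w in counts}
    return model, vocab

def predict(model, vocab, text):
    """Pick the label with the highest add-one-smoothed log likelihood."""
    best_label, best_lp = None, -math.inf
    for label, counts in model.items():
        total = sum(counts.values())
        lp = sum(math.log((counts[w] + 1) / (total + len(vocab)))
                 for w in text.lower().split())
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label
```

       A model this simple inherits every bias of its training phrases: if "alone" only ever appears in one culture's way of describing distress, other ways of expressing the same feeling go undetected. 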
       Within education there are countless examples.  The first one I mentioned is one of them, but the question is truly: who will actually have access to these technologies?  From personalized curricula to AI tutors, you name it, who will actually get those?  Will it be, as in many other cases, wealthy, privileged young people, or will it be everyone?  And will it actually improve learning and education, or will it not?  What are we going to do with all the educators who also need to be trained and reeducated about these technologies, and who often confront them with a lot of fear and anxiety about the big unknown? 
       And then when we looked, for instance, at work, just briefly because I don't want to spend too much time, you see examples like this, bringing up a lot of fear and negative visions about the future, about automation and how it might remove many job opportunities.  For us a question was, for instance: all these headlines and reports range from one statement, that lots of jobs won't exist and will no longer be there when you finish your studies, to it's not that bad and it's not that extreme.  But still, in the media, the discussions are making young people to a certain extent nervous about the future of their careers and pathways. 
       So I think more work needs to be done in this field to better understand it.  Same, again, and that's also connected to the Wired article: how can we also increase young people's interest in entering career paths that will then hopefully shape these new technologies? 
       Across all topics, I think the biggest theme that we are concerned about at Youth and Media is the question of inclusion: inclusion of young people, but also inclusion of different perspectives, thinking about inequalities and inequities, and whether these new technologies will be able to reduce some of these or actually make the gaps even bigger. 
       I'm excited that Steve is also going to talk about the work we're now doing together around AI principles and guidelines, going from very theoretical things to more practical things, helping institutions, organizations, and governments to think through this process and how you can actually do work in this space while being more mindful of young people and their rights. 
       So, again, I personally have a ton of questions.  I'm excited about your inputs and your questions, and I hope we can debate that either here today or online or in different parts of the world.  So thank you so much for being here. 

(Applause).

>> JASMINA BYRNE:  Thank you so much, Sandra.  As you mentioned, we're going to move on to hearing from Steven Vosloo, who is a policy specialist at UNICEF and my colleague.  He will tell you about the initiative that we just started in collaboration with the Berkman Klein Center, the government of Finland, the World Economic Forum, IEEE, the 5Rights Foundation, and a number of other organizations and experts, who for the first time gathered in June this year to talk about how we can develop a set of policy guidance for industry and for governments around children's issues and artificial intelligence, and how to turn principles into practice.  Over to you, Steve. 

>> STEVEN VOSLOO:  Thanks, Jasmina.  This weekend I was in Helsinki, where we had the first workshop on developing these policy guidelines.  I wanted to share with you two slides that my colleague presented, because they were quite good at making AI, an abstract idea, quite concrete. 
       We're going to meet two people.  This is Emma.  Emma is born today, so she will be 11 years old in 2030.  Let's think about the kind of future we want for Emma, and then we'll look at Emma's sister.  It's a mental exercise, but bear with us. 
       So Emma was born with a very rare cancer, but thanks to AI-powered technology it was diagnosed early enough, and she has a normal life.  She loves school and learning.  She happens to live in a rural part of Finland; let's use the Finland example.  Thanks to AI-powered technology, and technology in general, she can have access to virtual tutors and to all the content and support that she needs. 
       She's also living in a world where greenhouse gases have been reduced somewhat.  This is a very wonderful utopia, but it's good to aim high.  That was thanks to machine learning and, again, AI-based systems helping us use energy more efficiently.  Her dream is to run a tech company.  She loves technology.  She's been helped by technology.  So she has that opportunity. 
       She's basically living in a world where all people have access to virtual content and opportunities.  Right now we know only half the world has access and is online.  So she's online the whole time, but she feels empowered, not dominated, by her online life.  She's aware when a machine makes a decision over her life and when it's a person.  Regardless, she knows there's somebody she can talk to about decisions, that there's some route of accountability.  No one uses her data unless she consciously shares it.  She feels she can trust technology. 
       This is Emma.  Okay.  She's born today. 
       So this is Emma's sister.  Sorry, wrong slide.  Thank you.  This is Emma's older sister.  She's 15 years old today.  She also loves technology.  Not the whole world is like Finland, but in Finland 97% of kids between 9 and 17 have a smartphone and spend most of their time online.  We know many users are children, and Emma's sister is one of them.  She's already using AI systems today, through her Facebook news feed, through her Instagram face filters that use facial recognition, and through Alexa.  She uses Alexa a lot.  She asks Alexa questions she perhaps doesn't ask her parents. 
       She doesn't know which technologies are using AI and why.  She doesn't feel like she has control over her data.  Her data is stored and sold and used in the same way as adults' data, because she's using the same systems that her parents and her peers use. 
       She has friends abroad who go to the schools that Sandra showed a picture of, where there are cameras in the school, not only for security, which is noble, but also to measure concentration levels and learning levels. 
       So she doesn't feel like she has any special protections in her current situation.  So these are the two personas we're looking at.  This is where we are today, and this is where we would like to be in 2030.  That's kind of the inspiration for the policy guidance: to think about what kind of changes we need to make today to get from the one to the other. 
       So we know we need to focus more on children.  As Jasmina said, this is the 30th anniversary of the Convention on the Rights of the Child.  Children are the most vulnerable part of the population.  They also have the most potential.  We have the Universal Declaration of Human Rights, but on top of that we have the Convention on the Rights of the Child, because children have special needs and special requirements.  That kind of thinking about special protections needs to be carried over into the digital world.  I know that's what many of us are working on here. 
       These are the four main types of rights that we have in the convention.  I won't go into those now.  The good thing is we don't have to start at the beginning.  There's a lot of great work being done.  Here are some examples from UNICEF.  Many things have been done to protect children and to think about those special privileges for children; designing for child rights is one. 
       This is a report that Sandra also had up.  It was done by UC Berkeley and commissioned by UNICEF.  It looks at how children use AI systems and how that impacts their rights.  It has great recommendations at the end for corporations, educators, and parents.  The one that caught our eye was for governments, about integrating children's rights into national plans.  We wanted to see what those plans have to say about children or children's rights, and what they don't say. 
       This is a report that was produced about a year ago by CIFAR, a Canadian research institute, looking at countries that have AI strategies or are developing them.  In their analysis they created these eight policy buckets. 
       You can see quickly that a lot of the attention goes to research and industrial strategy, the next cohort of talent, inclusion, or using AI for good in government. 
       So we continued this research and wanted to look at these same documents but from a child lens perspective.  These are the four main buckets we chose.  What does the strategy say about cultivating children as a future workforce?  About children in a changing world, that is, teaching children to be conscious users of technology and about the ethics and design principles?  About protecting children's data, privacy, and rights, whether in education or in health?  We found a fairly light touch.  If the colors were dark, there would be a significant amount of attention paid to those topics.  It's a fairly light touch for children. 
       We also looked at 19 sets of ethics principles.  Again, maybe this isn't surprising, but here most of the focus is on protecting children's data and privacy, which is great, but perhaps not so strong in the other areas. 
       So really it was confirming what Sandra had found and what many others have found, which is that there isn't much attention given to AI and children.  But we feel there needs to be.  The policy guidelines want to focus on that niche: how do AI systems impact children, and how do we, as technology providers, as governments, at the UN, as civil society, need to think about what kind of provisions, protections, and empowerments we need to put in place? 
       So our plan is to develop this guidance and to pilot it over two years.  We will host workshops like the one I just held in Helsinki two days ago.  There are more coming up in Asia, Africa, and Latin America.  We'll hold a high-level meeting next June in Helsinki.  We want to consult widely, and we're working with a number of partners who are advising us at this stage: the Berkman Klein Center, the 5Rights Foundation, the World Economic Forum, IEEE.  This is our advisory board, but we want to get as many inputs as possible. 
       I'll leave you with this.  These are some draft principles.  They are not new principles; you will recognize them if you're familiar with the AI principles space at all.  I keep hearing at conferences that there are now over 200 different sets of AI principles and ethics, which is great, and luckily they're all starting to look the same, like this.  These are the kinds of headlines: transparency, accountability, protection.  What we've tried to do here is to take the human adult, which is usually the person one thinks of, and think of the human child: what does that mean for transparency or inclusion when it's a child? 
       So please join us on this journey.  Come chat with me afterwards.  And let's see how we can get as many inputs as possible into making this work as comprehensive as possible.  Thank you. 

(Applause).

>> JASMINA BYRNE:  Thanks so much, Steven.  Hopefully at this time next year, at the next IGF, we will already have the first part of our journey completed, with this policy guidance drafted.  And we just want to say, we discussed earlier before this session with our colleagues from Berkman Klein that we want to make the guidelines as practical and pragmatic as possible, with as many examples as possible of how you actually apply these principles in practice, in different sectors and in different areas of a child's life and well-being. 
       Now I'm turning over to our commentators, who are going to give us five-minute comments on what they've heard and tell us more about their work and their involvement. 
       So the first person is Armando Guio, who used to be an advisor to the Ministry of ICT in Colombia. 

>> ARMANDO GUIO:  Thank you, Jasmina; thank you, Steve.  And thank you to the Berkman Klein Center for organizing this panel and giving me the opportunity to tell you a little bit more about the experience in Colombia.  I had the opportunity to advise the Colombian government in the design of its AI strategy.  This is the first AI strategy in South America.  It's CONPES 3975, approved by the president and the cabinet.  This is a very good example of how to try to put into practice the principles and the ideas that have been shown by academia and by organizations such as UNICEF. 
       I have to say that the Youth and Artificial Intelligence report from the Berkman Klein Center was influential for us, because it set out specific points that we considered very important for designing this kind of strategy.  The future of work was essential, and education.  We knew that an AI plan, an AI strategy for a country, without a clear talent and education strategy was not going to be very effective, and that is also in the report.  I think we need to consider that.  At the same time we have this very big concern about how to develop and involve children in the more creative industries and in creativity.  We believe that creativity, as is said in the report, is very important for this new stage of the industrial revolution. 
       I think one of the most interesting things about the report is this quote I have here in the presentation: obstacles to the incorporation of AI-powered technologies in under-resourced schools and underrepresented homes could exacerbate existing gaps within the youth population with respect to access to AI systems and the skills to utilize them. 
       That was one of the biggest concerns in Colombia, a developing country: of course, we didn't have enough resources to provide to the students and schools, and we had to do something, because AI, of course, is going to be influential in the world, and we want Colombia and Colombians to be ready in the future.  So we had to do something about that. 
       That's why the main guidelines of the strategy also answer many of the topics and issues that were in that report.  I can describe four main points of the strategy, which Colombia is now implementing, that are essential.  First of all, developing skills and implementing AI in the schools.  The first thing is that we believe it's very difficult to say, this is the set of skills that is required in order to deal with AI.  AI is changing all the time.  So we have to be open to the experimentation stage.  We have to be open to discovering that the technology is changing, that teachers and students have to evolve and adapt to all these technologies.  And that's why the strategy says we have to promote these kinds of experiments in schools.  That's something challenging, something new, but something we believe was quite needed in the Colombian education system.  We think it's a good example for Latin America that we have to consider a new approach to education, an approach in which we still don't know the whole set of skills that is required. 
       Second, we wanted to promote creativity through nonconventional learning environments, and the Ministry of Education was very committed to this.  There were nontraditional educational environments that were used and are now implemented in Colombia, getting out of the traditional system and trying to promote different models and methodologies of co-creation, letting children be more involved in the construction of the curricula.  That was one of the biggest steps that we have in the strategy. 
       Then we have another point, which is to identify and support children with high-performance capability, and this is going to be a very important task.  We believe that high-performing students, especially in some specific topics such as basic sciences and math, are going to be very relevant to designing and developing AI and to helping Colombia and Latin American countries develop their own AI systems.  We need to identify these children as soon as possible, but also be careful not to discriminate against children who do not have this high performance. 
       How are we going to answer the expectations of those who have high performance and those who do not?  And that's also related to the fourth point, which is the future of work. 
       What is going to happen with children, with the part of the working population that is considered not to have all the skills required in order to be productive in the future?  That's why we believe the strategy includes a specific action of cooperation and collaboration with the private sector.  We want to understand from the private sector which skills are required, which skills our children need to develop in order to become productive.  And we can only know that if the private sector is also telling us which those skills are.  So we believe that was very important in this strategy.  And that's why we also recognize that we are still learning.  We don't have all the answers, but this is the first step toward developing those skills, so that children in Colombia can start getting the proper education and the proper knowledge in order to face the industrial revolution and to deal with AI systems. 
       As I said, this is the CONPES document.  That is our digital transformation strategy and artificial intelligence strategy.  In the illustration you can see that the idea is to reduce inequalities and gaps, and to ensure that all children in Colombia can benefit from AI, not just some who have the resources to interact with these technologies.  We believe in the Convention on the Rights of the Child, and we want children in Colombia to have the best possible education and the skills that will help them in the future. 
       That's mainly the strategy, and that's what Colombia is working on now in its implementation. 

>> JASMINA BYRNE:  Thank you so much. 

(Applause).

>> JASMINA BYRNE:  Great to hear about the Colombia example.  Now I'm turning to Sabelo, a fellow from the Berkman Klein Center.  Mic over to you. 

>> Thank you so much.  It's such a pleasure to be here, and an honor to share with you an African perspective on how we expect this technology will affect our livelihoods.  I'm thankful for the opportunity to do some of this research at the Berkman Klein Center and to work with some colleagues here as well.  It's a pleasure to be here. 
       When we're looking at the African continent, almost half of the population is under 18.  That's a significant amount.  And 70% of the population is under 30.  By 2026, sub-Saharan Africa will have the greatest number of children under 18 anywhere in the world.  In addition to this, we're facing extreme challenges.  Climate change has devastated the livelihoods of the continent and the well-being and security of its children.  When we look at informal employment, almost half of the population is working in vulnerable employment. 
       So this makes us wonder: what is the future of work on the continent?  Will the children, will the youth, have jobs within the economy, within the continent, or will we see this continuation of young Africans taking dangerous routes to Europe or North America to try to find better livelihoods? 
       So this is of great concern and something we have to look into.  Other disparities, of course, exist within healthcare, education, and financial and digital inclusion.  And within these disparities we see gender inequality.  There's a disparity between young girls and women in how they access and use the internet, how they use mobile devices, and how they use technology. 
       The use of AI has the potential either to worsen these disparities or to help resolve them.  To fully realize the potential and the benefits of AI, we must seek the potential of AI while keeping in mind the potential harms of algorithmic technology, and also the underlying power asymmetries that exist within the continent. 
       In the spirit of Ubuntu, which is a philosophy that we uphold on the African continent, we must seek that potential through meaningful dialogue with children.  I was excited to hear some of the speakers talking about dialogue, inclusion, openness, and treating the youth as experts who can influence how we build AI technology. 
       So that's very heartening.  This includes collaboration, cooperation, openness.  We must empower children to become successful members of the community.  On the African continent and elsewhere it is often said that it takes a village to raise a child.  We are that village.  We have a responsibility to the youth, a responsibility to their success, and a responsibility to create a more equal society, including in how we respond to the climate crisis and in the creation of safer internet platforms.  So I'm excited to hear what will be said. 
       Excuse me.  We have a responsibility to the youth, to ensure that future generations have a society and environment where they can grow and fully realize their potential and human dignity.  Thank you so much. 

(Applause).

>> JASMINA BYRNE:  Thank you so much.  This is very inspirational, particularly for us at UNICEF.  One of the things I would like to emphasize here: in the process of developing these policy guidelines, we're working closely with the Berkman Klein Center to carry out a series of consultations with young people and children, because we see them not only as those who can benefit from AI but also as a very important stakeholder group in these discussions.  As I said, next time at the IGF we'll have some of these young people with us, but maybe we'll show you a short video, one minute, after Steve finds it, following our next commentator and speaker, Karuna Nain, who is the global policy lead with Facebook. 

>> Thank you, Jasmina; thank you, Sandra and Steve, for having me here.  I wanted to use my five minutes to talk about two specific examples of how we use AI on Facebook and on our platforms to, first, keep children safe, and, second, make sure we're giving people an appropriate or customized experience on our services. 
       So let me start with the example on safety.  Many of you in this room will know of our work in this space, keeping children safe is something that we consider one of our foremost responsibilities.  And we ‑‑ sorry, I'll get closer to the mic.  Okay.  Let me try. 
       So we've been using technologies for many years to detect previously reported images that are uploaded onto our platforms, to make sure that we take them down at the time of upload and file a report with the National Center for Missing and Exploited Children.  Last October we announced that we're able to use the power of machine learning and artificial intelligence in two additional ways to keep children safe.  First, we're able to use the power of AI and machine learning to detect previously unreported images of child nudity.  So far we had only been able to detect content that had already been reported, so this is a huge step forward, because now we can detect content that hasn't been reported.  At Facebook we have more restrictive community standards on child nudity.  Even if an image has been shared because a parent or grandparent said they're cute, we would err on the side of caution and take that down, because we're a social network and other people could use it in ways not intended.  If it's exploitative, if it's been shared in an exploitative context, we would file a report with the National Center for Missing and Exploited Children. 
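       The first mechanism described here, matching uploads against previously reported images, can be sketched as a hash lookup at upload time.  Real systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding; the cryptographic hash below only matches byte-identical files and is a simplification used purely to show the flow, with function names invented for illustration. 

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # A real pipeline would use a perceptual hash that tolerates
    # re-encoding; SHA-256 only matches byte-identical files.
    return hashlib.sha256(image_bytes).hexdigest()

def check_upload(image_bytes: bytes, known_hashes: set) -> str:
    """Decide an action at upload time: block and report imagery that
    matches the known-hash database, otherwise allow it through (in the
    described system, an ML classifier would then scan new content)."""
    if fingerprint(image_bytes) in known_hashes:
        return "block_and_report"
    return "allow"
```

       The point of the hash database is that matching is cheap and happens before the image ever becomes visible, while the newer ML classifiers cover the content no one has reported yet. 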
       The second use of AI and machine learning in this space is to detect accounts that are engaging in potentially inappropriate contact with minors.  As you can imagine, if we find these accounts, the machine learning flags them, and they are sent to a human reviewer to check and investigate what is going on.  If we find there is something going on which is exploitative in nature, again we would file a report with the National Center for Missing and Exploited Children.  This is how we keep Facebook safer and the internet safer as well. 
       The second example I want to share is more around customization, giving people an experience that is pleasant on Facebook, making sure that they can personalize Facebook.  For those of you who use Facebook, when you come onto our platform, you usually go to your news feed.  The news feed is where you see the posts, photographs, and links that your friends and family have shared, the pages you follow on Facebook, and the news accounts that you've chosen to follow.  The news feed is based on signals that you've been giving us.  We use three signals to help us prioritize what content we should be showing you in your news feed.  Number one: who posted it.  Are these people you tend to engage with more on our platform?  Number two: the type of content.  Is it a photo, a video, a link, an ordinary post?  Maybe you're living somewhere in India where internet bandwidth is very low in your neighborhood, so Facebook doesn't want to try to show you a video, which would be a terrible experience; it would be buffering and wouldn't be great.  That's the second signal we use.  Third is your interaction with similar posts.  If you're engaging with those kinds of posts, we want to show you similar content, because you've given us a signal that that's the type of content you want to see. 
       We've been focusing on two specific things here.  One, giving people more control.  Not only can you customize these preferences by going into your preferences menu on Facebook, you can do it at the point of every post.  Right next to each post you can click on the dropdown menu and tell Facebook you don't want to see similar content going forward, or you can check why is Facebook showing me this?  We want to give you more controls and transparency around that.  Any time we make a big change in our news feed ranking, we put out a post and make sure we're being transparent around it.  We made a huge step forward at the start of the year: we're going to down-rank clickbait, which you clearly told us is not high-quality content and you don't want to see.  We want to make sure people know when we're making these steps forward and put out transparent information about them.  We're focusing on giving people controls and being transparent around how these tools really work, so you have granular-level controls and can customize these experiences for yourself.  I'm hoping these two examples frame the conversation for some practical thought around what these policies should be.  Thank you. 

(Applause).

>> JASMINA BYRNE:  Thank you, Caruna, for sharing this.  I know that because of the first example you gave us, the reporting of child abuse images has increased dramatically, and it has helped identify children who were victimized and also catch the perpetrators, working with law enforcement services. 
       Do we have a video?  So we have a short one‑minute video where you will see a youth activist who actually attended our workshop in June talking about AI and children.  Then we'll open this for discussion. 
       (Captioned video playing).

>> JASMINA BYRNE:  Okay.  That's it.  We even have a hashtag.  If you're tweeting about this session, please use this hashtag and get in touch with us.  But now we have about 20 minutes for questions and answers and any comments.  Please introduce yourselves and ‑‑ yes, we have somebody over there.  

>> AUDIENCE:  Hi, my name is Am Taub.  I work in India in online safety with an organization called Social Media Matters.  Thank you for the amazing presentations.  My question comes from something in the first slide: how does the panel see the role of parents in this extremely important right of children, as AI still remains a very magical term for most of the parents out there?  When it comes to children, especially in India, children's rights start at home and parents play a very important role in them. 

>> JASMINA BYRNE:  Thank you.  Should we take a couple more questions and then turn back to the panel. 

>> AUDIENCE:  Thank you.  Very interesting.  So I work for ICPA International.  Our mission is to tackle the exploitation of children.  Basically we're looking at harm online, to explain why I'm going to ask the questions that I'm going to ask.  So we do acknowledge that there are AI-based tools being developed.  This is, of course, to be welcomed, including the ones you were mentioning, Caruna.  We want those tools to anticipate harm.  In parallel to that, I think we should look at broader governance issues related to those tools, because basically what we're doing here is moving towards a system where we're delegating human decision making to algorithms.  And when it comes to children, I think this deserves a closer look at what the implications are for children.  And I believe that's what we're all doing collectively. 
       There is a particular topic that is worrying me, which is the bias in the data that is at the heart of AI systems.  When we think of the solutions and the type of harm that is done to children, the data that is available at country level is the basis; for example, images.  I think of the Interpol database of images, predominantly portraying victims with an ethnic profile that is predominantly white, and the same for offenders coming from the western world, because those that are feeding this database are predominantly from the western world.  So if, for example, Interpol would develop an AI tool, the data that would feed this tool would be influenced by what is portrayed in those images.  What would be the implication?  Perhaps they wouldn't be able to detect children who are from other regions, or offenders.  This is a very practical example.  This is one of my worries. 
       The other one, and the last one, sorry, to finish, is that I'm observing a trend where each company, like Google, Facebook, et cetera, is developing its own tools, which is very good, but I'm sensing this is also moving us towards a world where there is a fragmentation of the protection of children.  So I'm wondering if we should create a pool of open-sourced tools that we would all agree are for the common good.  Instead of each company developing those tools on its own, we could share them and make them available in other regions for other companies.  Thank you. 

>> JASMINA BYRNE:  Thank you so much for these suggestions and comments.  Is there any other question?  Two people in the back, and then we'll turn to the panelists.  You don't have a mic. 

>> AUDIENCE:  My name is Clara.  I'm from IEEE.  My question would be for Facebook.  In particular, whether you make a distinction between users when a user is a child, whether you recognize a child as being a child, and whether you make any distinction in the way you're handling the data of children.  And lastly, whether there is a way, once that child becomes an adult, to reset things to zero, so that, let's say, a profile that was set up as a child or a teenager is not perpetuated once that child becomes an adult. 

>> JASMINA BYRNE:  Thank you. 

>> AUDIENCE:  Thank you very much.  My name is Anjun, I'm from UNICEF.  I want to react to Mariella's point.  My initial question to Caruna is about when you use AI to detect previously unknown images.  Obviously the algorithms will have to be trained with enough data sets to understand that it's a child, and maybe the nudity plays a significant part in the decision making.  As Mariella pointed out, there would be human intervention required, and I'm sure the current policy is to not rely only on AI.  But my response to Mariella's second point is that I differ slightly in opinion in terms of how Interpol is managing that database, because I don't think the purpose of AI on the existing database would be to find new images; they are not using AI to detect similar images out there. 
       So I'm not completely sure how that conflict would come in for an existing database of known images, but I completely agree with you that when it comes to scouring the web for detection of unknown images, if that's something that Interpol or other agencies will want to implement going forward, we need to be cautious about using AI alone.  Thank you. 

>> JASMINA BYRNE:  Lots of questions for Facebook.  Let's turn first to Steve and Sandra with the question about parents, and also, if you want to add a little bit about these other issues that we discussed around data and tools for detection and protection, what are your views? 

>> SANDRA CORTESI:  I mean, I find the question very valid, and it's an important question.  I do think that we have to look at the whole ecosystem.  It can't just be a focus on youth alone, but youth at the center, and then looking at the human and support structures around that young individual, where obviously parents are key, but also educators are key, and broader society.  I agree with you that in some cultures parents have to be involved and have to be there probably from the beginning, but in other cultures it might be other adults who are in charge of raising the child or this young person. 
       I must say, for instance, at Youth and Media we also do significant work in communities where it's more complex.  You may have different adults sharing responsibility for how you're raised.  I'm sometimes cautious about overemphasizing the role of parents, because around the world the structures are very different, and the situations are very different.  If we overemphasize that, we may again disadvantage those that are already disadvantaged to a certain extent.  But I think the question is very valid.  And the adults around youth are extremely important. 

>> STEVEN VOSLOO:  Just very quickly, I'll build on what Sandra said.  On the ecosystem: there's a human ecosystem, but there's also a technology ecosystem and an economic ecosystem.  Your point about the data not being representative of all children in the world, or all people, is very valid.  That comes down to access, the digital divide in a sense, and the different levels of access and the different levels of cost.  And so that data doesn't exist, and our colleagues in New York on the innovation team are trying to address this, because they need a lot of that data for the actual AI systems to improve services for children. 
       So they are beginning to look at a data commons, or a data pool where data is safely treated, to begin to address that.  Just quickly, your point about delegating responsibility to algorithms, that's a really good point.  Hopefully these principles of transparency and accountability are some buffer against that.  I think Facebook does it very well, where you use the algorithm for what it's good at, and then when you need the human, you bring in the human.  So hopefully we always keep those kinds of safeguards or protective measures in place to make sure that we don't hand over control completely where we shouldn't. 

>> JASMINA BYRNE:  Caruna? 

>> I'm going to build on what Steven just said.  Given the volume at which the internet is being used and content is being generated, there's a space for AI to help ensure that human intervention happens at the appropriate time.  One of the ways we use AI, for instance, Sandra mentioned suicide prevention: within the reported content there's a lot of noise.  Can we use the power of AI to determine if there is a post that is more urgent, and bring it to the top of the stack so the human reviewer gets to it first and not too late?  I think there's a space where combining the power of AI and human intervention can get people help at the appropriate junctures. 
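The triage idea described here, using a model score to push the most urgent reports to the top of the human review queue, can be sketched with a priority queue.  The `ReviewQueue` class, the report IDs, and the urgency scores are all hypothetical; they illustrate the mechanism, not any platform's actual implementation.

```python
import heapq

class ReviewQueue:
    """Toy triage queue: higher-urgency reports reach human reviewers first."""

    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker keeps insertion order stable

    def add(self, report_id: str, urgency: float):
        # urgency is a hypothetical model score (0..1), e.g. estimated
        # risk that a post indicates imminent harm; negate it so that
        # heapq (a min-heap) pops the highest urgency first.
        heapq.heappush(self._heap, (-urgency, self._counter, report_id))
        self._counter += 1

    def next_for_review(self) -> str:
        # The human reviewer always pulls the most urgent report.
        return heapq.heappop(self._heap)[2]
```

The design point is that AI never makes the final call here; it only reorders the work so that the human gets to the most critical case "first and not too late."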
       I received questions around data bias, around delegating decisions to algorithms, which we just discussed, around the protection of children and how we're building the external ecosystem, and finally around data.  I'm going to try to get to each and every one of those. 
       Let me start with the data question.  On Facebook and all of our platforms, we don't allow people below the age of 13 to set up accounts.  We rely on stated age, but we have some protection systems built in.  If someone tries to give us an age which is below 13, we'll give them a message that they're not eligible for the platform yet.  They may try to re-enter a different age to game our system, but because we can use cookies, we can let them know that we know they've already tried and that they're not eligible to set up an account on our platform.  We try to make sure that we are reaching only people above the age of 13, but we do rely on stated age. 
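The age gate described above, stated age plus a cookie that remembers a failed attempt, can be sketched in a few lines.  The function name, the flag, and the minimum age constant are assumptions for illustration only.

```python
MIN_AGE = 13

def can_sign_up(stated_age: int, previously_rejected: bool) -> bool:
    """Toy sign-up gate based on self-declared age.

    previously_rejected models the cookie flag described in the
    transcript: if this browser already entered an under-13 age,
    a new attempt with a higher stated age is still refused rather
    than letting the user game the check by re-entering their age.
    """
    if previously_rejected:
        return False
    return stated_age >= MIN_AGE
```

The limitation the speaker acknowledges is visible in the sketch: the gate is only as strong as the stated age plus the cookie, so it deters casual circumvention rather than verifying age.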
       I think two years back we did a whole series of round tables around the US to hear from parents about their needs, how they see tech playing a role in their lives and fostering the parent-child relationship.  Parents that work two or three jobs use technology to stay connected with their kids.  We wanted to hear from them what they're looking for.  We heard a lot of feedback from parents that they were lending their phones to their kids to speak to grandparents.  We heard from military families that use messaging services to stay connected with their kids when they are sent out on duty.  We launched something called Messenger Kids because we wanted to build for parents.  We've taken a cautious approach with Messenger Kids.  It's meant for younger children, below 13.  We want to collect very minimal data.  Messenger Kids data is stored separately and kept apart.  The goal is to make sure we're responding to the needs of parents. 
       In each country where we launch, we want to hear from families what their needs are and cater to what that community wants from technology.  The reason I give you that example is that we take a different approach there versus when we're building for 13 to 18.  This is a constant conversation we're having around what is age appropriate, because children can be developmentally different as well. 
       I'm going to jump to the next question, because I think you raised valid questions around data bias, bringing in an external ecosystem, and making sure we're not fragmenting child protection.  At least I can speak for the platforms we're operating at Facebook, which are global.  Four out of five people that use Facebook come from outside the US.  The data we use to train our artificial intelligence and machine learning systems reflects the diversity of our user base.  It is something we're constantly being very conscious of, because we would not be able to catch the volumes of content which we're proactively detecting if we're not training on diverse data sets.  We work with organizations like the National Center for Missing and Exploited Children, who in turn work with countries around the world through their system.  We need a global approach because some of these crimes cross borders.  We do not want to reinvent the wheel, because the burden on us becomes really high if every country has a different system.  We're constantly thinking about how we can build out that external ecosystem. 
       One of my favorite programs at Facebook is the child safety hackathon, where we bring together the nonprofits that work in this space to think about what the next generation of technology is.  Facebook is open sourcing the photo and video hashing technology we use, so smaller organizations and nonprofits don't need to create these technologies and can just leverage our work in this space. 
       So I hope I answered all of those questions.  I speed-read through the last ones. 

>> JASMINA BYRNE:  We have one minute. 

>> I did the data one first. 

>> JASMINA BYRNE:  Sorry for interrupting.  We will have to move on; hopefully you can talk to our panelists after the session.  I want to give the floor to Sabella and Armando. 

>> I wanted to emphasize what the colleague said earlier about the harms and the power structures and asymmetries in the west influencing technology.  What I would like to start with is this: we fully track our youth from birth through daily life using increasingly powerful technologies.  What does that mean?  Are we eventually turning them into something commodifiable through all this tracking from birth throughout their entire livelihoods?  To reiterate your point, in order to create AI with youth, we should in general level the playing field.  Most of the large internet platforms that dominate come from the west, and their designs embed certain ways of seeing the world.  AI in the global south must be localized, even open sourced.  We must consider the economic power asymmetries when large platforms offer free internet services.  What does that do to creators who want their own platforms?  It becomes a dangerous place where a few dominate the experience of most of the internet population. 

>> JASMINA BYRNE:  Thank you.  Very, very good and valid points. 

>> ARMANDO GUIO:  I think this panel has been very helpful, and it's important for governments in Latin America, like the Colombian government, to consider the role of children in the design and implementation of policies.  How are we going to listen to them?  How are we going to create those spaces?  I think that's something we have to consider, and also to build a more pragmatic approach to these principles and to children's rights; that's something we definitely need.  Of course, there is also this whole issue of privacy that has been discussed.  It's very important for us, and we're concerned about it as well. 
       But we think it's very important, especially on children's privacy, that there is a lot of evidence, and we want evidence-based policy.  It's important to have case studies so that we have more information when we're thinking about regulation or specific measures from the government regarding children's protection.  So that's something that we are also very aware of, and we're expecting to have all that evidence and data, which will be very helpful. 

>> JASMINA BYRNE:  Thank you, everybody.  So we've come to the end of our session, but this is really just the beginning of a journey.  And I hope you stay in touch with us, provide your comments and inputs.  You can keep talking to us about this.  Steve cleverly did not put his email there, so everybody will be writing to you, Sandra.  I want to say that I hope at the next IGF, and at other forums, congresses, and conferences like this one, when we discuss AI we also discuss children, because there are several panels here around AI and this is the only one that addresses the issues related to children.  We don't want to only talk to like-minded people that come to our sessions; we want to be part of the opening panels and we want to have more children's voices included in the mainstream programs of IGF.  I just want to say this has also been a very gender-balanced panel: if you include me, it's three women and three men, which is also a novelty at IGF.  Thank you all for your participation today. 

(Applause). 
