IGF 2023 - Day 2 - WS #469 AI & Child Rights: Implementing UNICEF Policy Guidance

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> VICKY CHARISI: Okay.  Good afternoon, everybody.  Welcome to the session on implementing the UNICEF Policy Guidance on AI and Children's Rights.  This is a session where we are going to show how our extended team tried to implement some of the guidelines that UNICEF published.  I would like to welcome the online moderator, Daniella DiPaola, who is going to help with the online speakers.  And here I would also like to invite Steven Vosloo and Randy Gomez, my co-organizers, to come on the stage so we can set the scene and start the meeting.  Thank you. 

          First, let me introduce Steven Vosloo.  Steven is a digital policy, innovation and tech specialist with a focus on emerging technology, and currently he's a digital policy specialist for UNICEF based in Florence, Italy.  Steven was the person behind the policy guidance on AI and Children's Rights at UNICEF, and he can probably explain more about the initiative.

     >> STEVEN VOSLOO: Good afternoon, and good morning to those online.  It is a pleasure to be here.  I'm a digital policy specialist with UNICEF, and I spend my time at UNICEF looking mostly at the intersection of emerging technologies, how children use them or are impacted by them, and policy.

     So we have done a lot of work around AI and children.  Our main project started in 2019 in partnership with the government of Finland and funded by them, and they have been a great partner over the years.  In 2019, AI was a very hot topic, as it is now.  We wanted to understand whether children were being recognised in national AI strategies and in ethical guidelines for responsible AI, so we did some analysis and found that in most national AI strategies at the time, children really weren't mentioned much as a stakeholder group.

     And when they were mentioned, they were either seen as needing protection, which they do, but there are other needs, or in terms of how children need to be trained up as the future workforce.  So not really thinking about all of the unique needs of every child, their characteristics, their developmental journey and their rights.  As for AI ethical guidelines, we didn't look at all of them, but generally found not sufficient attention being paid to children.

     So why do we need to look at children?  Well, of course at UNICEF our guiding road map is the Convention on the Rights of the Child.  Children have rights.  They have all of the human rights plus additional rights, as you know.

     One third of all online users are children, and in most developing countries that number is actually higher.  And then thirdly, AI is already very much in the lives of children.  We see this in their social apps, in their gaming, and increasingly in their education.  They are impacted directly as they interface with AI, or indirectly as algorithmic systems determine health benefits for their parents, or loan approvals, or welfare subsidies.  And now with generative AI, which is the hot topic of the day, AI that used to be in the background has come into the foreground, so children are interacting with it directly.

     So, very briefly, after the initial analysis we saw the need to develop some sort of guidance for governments and companies on how to think about the child user as they develop AI policies and AI systems.  We followed a consultative process and spoke to experts around the world.  Some of those folks are here.  And we engaged children, which was a really rich and necessary step, and came up with a draft policy guidance. 

     And we recognise that it is fairly easy to arrive at principles for responsible AI or responsible technology; it is much harder to apply them.  They come into tension with each other, and the context in which they are applied matters.  So we reached a draft and said: use this document, tell us what works and what doesn't, give us feedback, and we will include that in the next version. 

     We had people apply it, and we worked closely with eight organizations, two of them here today: Honda Research Institute -- and Judith is on her way.  We basically said, apply the guidance and let's work on it together in terms of your lessons learned and what works and what doesn't.  That is what we will hear about today.  It was a real pleasure to work with the JRC and the Honda Research Institute and to learn those lessons.

     And so yeah, just in closing, AI is still very much a hot topic and an incredibly important technology to get right, increasingly so for the Rights of Children.  Like I said, with generative AI there are incredible opportunities, for personalized learning, for example, and for engagement with chatbots or virtual assistants.  But a virtual assistant could also give you poor mental health advice, or you could tell it something that you are not meant to, and there is an infringement on your privacy and your data.

     As different governments now try to regulate AI, with regional blocs and the UN trying to coordinate, we need to prioritize children.  And we really need to learn from what is happening on the ground and in the field.  It is a real pleasure to have these experiences shared here as inputs into this important process.  Thank you.

     >> VICKY CHARISI: Thank you so much, Steven.  Indeed, at that point we already had some communication with UNICEF through the JRC of the European Commission.  And we already had an established collaboration with the Honda Research Institute in Japan, evaluating the system from different technical points of view, trying to understand the impact of robots on children's cognitive processes, for example, or social interactions, et cetera.  There is an established field of child-robot interaction.  And that was when we discussed with Randy applying for this case study with UNICEF.

     And I think Randy can now give us some context, from a technical point of view, on what this meant for the Honda Research Institute and his team.  Randy?

     >> RANDY GOMEZ: As Steven mentioned, there was this policy guidance, and we were invited by UNICEF to do some pilot studies to implement and test it.  That is why we at Honda Research Institute developed technologies in order to do the pilot studies.  Our company is very much interested in embodied mediation, where we have robotic technologies and AI embedded in society.  And as I mentioned earlier, as a response to UNICEF's call to actually implement and test the policy guidance, we allocated a significant proportion of our research resources to developing technologies for children.

     In particular, we are developing an embodied mediator for cross-cultural understanding: a robotic system that facilitates cross-cultural interaction.  The system connects to the cloud, and a robot facilitates the interaction between different groups of children from different countries.  Before the actual implementation and study, we used the UNICEF policy guidance to look into how we could implement this, including the interaction design between children and robot.

     So we deployed robots in hospitals, schools, and homes.  We also looked into the impact of robotic applications from social, cultural and economic perspectives, with children from different countries and backgrounds, and into the impact of robotic technology on children's development.  We tried some experiments with a robot facilitating interaction between children in a game-like application.

     Finally, we also looked into how we could put our system and our pilot studies in the context of some form of standards.  That is why, together with Vicky at the JRC, we looked into aligning our application with the IEEE standards.  And through this we had a lot of partners and built a lot of collaborations.  We are here and very happy to work with them.  Thank you.

     >> VICKY CHARISI: Thank you so much both of you.  This was to set the scene for the rest of the session today.  This was quite a journey for all of us. 

     And around this project there are a lot of people, a great team here, but also 500 children from 10 different countries, where we chose on purpose to have large cultural variability.  We have some initial results. 

     And for the next part of the session, we have invited some people who actually participated in the studies.  So thank you very much, both of you.  I would like to invite first Ruyuma, one of the students.  Thank you.  Ruyuma, you can come over. 

     Ruyuma is a student at a high school here in Tokyo.  And you can take a seat if you want.  Here.  Yeah.  Fine.  He is here with his teacher and our collaborator Tomoko Imai, and we also have online Joy, a teacher at a school in Uganda, where we tried to implement participatory action research, which means that we brought the teachers into the design and into the research team.  So for us, educators are not only part of the end-user studies but also part of the research.  We interact with them all the time in order to set research questions that come directly from the field.

     So we are going to start.  You can sit here.  Do you want to sit, or do you want to stand?  Whichever you want.

     >> RUYUMA YASUTAKE: I will stand.

     >> VICKY CHARISI: Sure.  We have three questions for you.  First, tell us about your experience in this process and in participating in our studies.

     >> RUYUMA YASUTAKE: Okay.  We have English conversation classes once per week at school, but we often have some problems continuing the conversation.  With our participation in the project, we had a chance to talk with children from Australia with the help of Haru. 

     Sometimes there was a moment of silence but Haru filled the moments and made the conversation smoother.  Haru would make interesting facial expressions and make conversation fun for us.  During the project we had a chance to design behaviors and interacted with engineers which was really nice.

     >> VICKY CHARISI: Yeah.  And during the project you probably faced some challenges, or there were some moments when you thought this project would be very difficult to get done?  Do you have anything to tell us about this?

     >> RUYUMA YASUTAKE: The platform is still not stable, and sometimes there was system trouble.  For example, once the robot overheated and could not operate, so we had to stop the interaction and set it up again. 

     But overall, the experience was positive because I had a great time talking with professional researchers while trying to fix the problem.  Being able to work in this international research was a very valuable experience for me.

     >> VICKY CHARISI: Thank you, Ruyuma.  And do you want to tell us how you would imagine the future of education, through your eyes? 

     You are now in education, so if in the near future you have the possibility to interact with robots or artificial intelligence within education, how would this work for you?

     >> RUYUMA YASUTAKE: Haru can connect many children with different partners and can be a partner to practice conversation by taking different roles, like teachers, friends, and so on.  And the use of an AI system can probably be more fair, yeah.

     >> VICKY CHARISI: Okay.  So thank you very much, Ruyuma.  This was an intervention from one of our students, but next time we can probably have more of them.  And now I would like -- you can take a seat.

     [APPLAUSE]

     >> VICKY CHARISI: You can take a seat.  Yeah.  And now we have an online speaker, Joy.  Can you hear us?

     >> JOY NAKHAYENZE:  We can hear you.

     >> VICKY CHARISI: Joy is one of our main collaborators, as she is an educator in a rural area in Uganda, in Abuduba.  Her school is quite remote, I would say. 

     Through another collaborator of ours we had an initial interaction with her; we explained our project to her and asked if we could have some sessions. 

     Our main goal in including a school from such a different economic and cultural background was to see whether, when we talk about Children's Rights, this means exactly the same thing in all situations.  Does the economic or cultural context play any role here? 

     So what we did was to bring together the students from Tokyo, an urban area, and the students from Uganda to explore the concept of fairness.

     So we ran studies on storytelling, and we asked children to talk about fairness in different scenarios: everyday scenarios, and technology and robotics scenarios. 

     Joy, would you like to talk a little bit about your participation in the studies?

     >> JOY NAKHAYENZE:  Thank you very much for inviting me.  Thank you very much.  I'm Joy.  I'm an educator from a school in Uganda in a rural setting.

     It has about 200 students who are 13 to 18 years old.  Most of the students live close to the school, generally with their parents. 

     The great thing about being involved in the project has been the exposure for my students.  The project enabled our students to participate and have hands-on experience that enhanced their understanding of and interest in technology and other cultures.  It was the first time for them to talk to children in Japan and other countries, and that really was a great experience for them.

     Additionally, a great bonus was the language learning, whereby the students were able to engage in interactive lessons and get feedback on their language, expressing themselves in Swahili and English. 

     We think the sessions were well planned and really captured our students' attention, which increased their engagement in the sessions that we had.

     In my opinion, the project really enabled social and emotional learning, whereby the students gained skills in emotional intelligence and felt compassion for their peers.  And they enjoyed and learned about Japanese culture.

     >> VICKY CHARISI: Thank you so much, Joy.  And would you like to tell us a little bit about possible challenges that you faced while you were participating in our studies? 

     Of course, we didn't have the opportunity to have a robot at the school there.  We are in the very initial phases, where we do ethnography, so probably this will come in the future. 

     But we already had other interactions and discussions with Joy.  Tell us a little bit about the challenges that you faced, even with the simple technology that we used during our project.

     >> JOY NAKHAYENZE: Thank you.  In my opinion, the major obstacle was the limited resources we had at the local level in Uganda, with the school being in a local setup.  The local setup has budget constraints, making it difficult to invest in technology. 

     And also we found that the Internet connection was not as stable as what you are used to there, and it meant the online sessions were very hard to keep up with in terms of timing.  Another issue had to do with cultural integration, whereby we feel there is a need to engage educators back in Uganda with the project, given the additional time and the adjustment to teaching methods.

     >> VICKY CHARISI: Thank you, Joy.  And what is your vision for the future?  What would you like to have in the future in the context of this project?

     >> JOY NAKHAYENZE: Funding for the project, and infrastructure for a stable Internet connection for all.  This is a basic need for the integration of technology in the school.  You find that at the school there is no power and no Internet connection; you are only using one phone or one laptop.  It is very hard. 

     And beyond the Internet connection, we feel we also need the resources and necessary materials, like the systems, the robots and the computer equipment.  At the school in Japan, all of the students had computers; we would like our students to have access to information as they do in Japan.  For the future, we envision our schools having the necessary technology, such as computers and robots for students, but also trained teachers.

     We feel this is important for all students and teachers.  We hope that all of the educators have the opportunity to participate in online workshops and training to learn about using technology in everyday teaching.  This project was a great opportunity for our students, and we hope that we will not stay only at the beginning, where we started, but will continue to grow with this exciting project.  Thank you very much.

     >> VICKY CHARISI: Thank you, Joy.  It has been a great pleasure to work with Joy and the school.  And thank you very much for your intervention today.

     [APPLAUSE]

     >> VICKY CHARISI: I don't know if Judith -- Judith, you are here -- great.  I would like to invite Judith.  As Steven said beforehand, our project is one of the eight case studies where we tried to implement some of the guidelines from UNICEF.  Today we also want to hear from another case study.  So Judith, I need to read your short bio because it is super rich.

     So welcome to the session, first of all.  Judith is a technology evangelist and business psychologist with experience working in Africa, Asia and Europe.  In 2016, she set up the Imisi 3D creation lab in Lagos, focused on building the African ecosystem for extended reality technology.  She is a fellow of the World Economic Forum and she is affiliated with the Harvard Graduate School of Education.  So the floor is yours, Judith.

     >> JUDITH OKONKWO: Thank you very much, Vicky.  Good afternoon, everybody.  What a pleasure it is to be here with you all today.

     I just want to tell you briefly about my engagement with UNICEF as part of the pilot for working with the guidance on the use of AI with children, which was really pivotal for us. 

     Before I start, I want to give you some context about the work that I do.  I run Imisi 3D.  We describe ourselves as an XR creation lab, and we are headquartered in Lagos, Nigeria.  Our work is to do whatever we can to grow the ecosystem for extended reality technologies, that is, augmented reality, virtual reality and mixed reality, across the African continent. 

     Our activities focus on three main areas.  The first I describe as evangelization: we do whatever we can to give people their first touch and feel of the technologies, give them access, and help them understand the possibilities today. 

     The second focus area for us is to support the growth of an XR community of professionals across the African continent.  We believe if we are to reap the benefits of these technologies, then we must have people with the skills and knowledge who can adopt and adapt these technologies for our purposes. 

     And then the third aspect for us is committing our time and resources to areas in which we think there is room for immediate, significant impact with these technologies for society today.  In service of that, we do work in healthcare, education and storytelling, and in digital conservation.  That healthcare piece is what brings me here today for this particular brief talk.

     So a number of years ago, in Nigeria, with a partner company, we conceived a project called Autism VR.  I'll give you a bit of background as to why.  Nigeria, if you are not familiar with it, is a country of 200-plus million people.

     It is a country that I would say is severely under-resourced when it comes to mental healthcare.  I don't want to go into the numbers in terms of providers relative to the population, but it is really, really worrying.  Alongside that, there is significant stigma attached to mental healthcare.  So you can imagine, for children who are neurodiverse, the ways they are excluded from society. 

     So we conceived a game called Autism VR.  It's a voice-driven virtual reality game that does two things.  First, it provides basic information about autism spectrum disorder.  Second, after providing that information, you then have the opportunity, through voice interaction, to engage with a family that has a child on the spectrum, and to see if you can put some of the things you learned into practice.  That's the idea.  And we are still developing this.

     We had been working on that for about a year or two when we were very fortunate to be introduced to Steven and his incredible team and the UNICEF guidance on the use of AI for children.  We had spent a lot of time believing we were following a human-centred design approach to the design and development, wanting to build with commendable considerations. 

     We wanted to increase awareness, foster inclusion, and support children who are neurodiverse.  But the guidance helped us shift our perspective from being just broadly human-centred to being specifically child-centred in the design approach.  We focused on three main indicators from the guidance.  First, we wanted to prioritize fairness and non-discrimination. 

     The way that would typically show up in a country like Nigeria is just exclusion, right, for children who are neurodiverse, or children whom the general public would have to work a little bit harder to understand or to engage with.

     We wanted to foster inclusion.  We wanted more people to have the knowledge to understand that behavior they might see is not behavior they should just consider off the scale and not worth engaging with.  And we really wanted to do all we can to support the well-being and positive development of children on the spectrum, and we believe that by creating awareness we can do this.

     There will be an image on the screen in a minute, a screen grab from an early version of the game.  But I will tell you what the experience is like.

     In the first scene there is a woman called Asabe, who is in the front room of a typical house in Lagos.  You go into the room and she starts to talk to you and provides information about autism spectrum disorder, checking your understanding every few sentences.  You respond and let her know whether you understood or not.  If you didn't, she will go back. 

     When you are done with that, she says: please go ahead and visit your family friends.  So the idea is that you then go through another door into a typical living room, the kind you would find in Nigeria.  When you get into the room, there's a family, and you are greeted by the parents.  They welcome you and say: here is our son, Tinde; see if you can get him to greet you while we get you some refreshments.  Then they exit the room. 

     And then you get to attempt to engage with their son.  The idea is that if you are able to do that, using the tools and tips you got from the previous scene, then eventually Tinde will not just engage with you by establishing eye contact but will actually stand up, come to you, and say good afternoon aunty, or good afternoon uncle, as the case may be.

     When we started building the game, we were building it for the Oculus, to let you know how long ago that was.  But the idea right now is to build for the Google Cardboard; I have one here.  That's really because this game will, first of all, be an open-source product, but it is really being built for the people, to ensure that more people have an understanding of what autism spectrum disorders are, or what neurodivergence is.  It is challenging to build for the Cardboard, but we also know that if we want it to scale in a place like Nigeria, where there isn't access to virtual reality headsets, that is definitely the way to go.

     >> VICKY CHARISI: Thank you so much.  We had some small practical problems, but we will show the image afterwards since we have the description.  Thank you so much for describing your work.

     [APPLAUSE]

     >> VICKY CHARISI: Now we have an online keynote speaker.  Daniella DiPaola, over to you.

     >> DANIELLA DIPAOLA: Hello.  Hi, everyone.  It is a pleasure to introduce Dominic Regester who is the Director of Education for the Centre for Education Transformation at Salzburg Global Seminar where he is responsible for designing, developing and implementing programmes on the futures of education with a particular focus on social and emotional learning, educational leadership, regenerative education and education transformation. 

     He works on a broad range of projects across education policy, practice, transformation and international development, including as a director of the Amal Alliance and as a senior editor for Diplomatic Courier, to mention a few.  Thank you, Dominic.

     >> DOMINIC REGESTER: Thanks, Daniella.  Good morning, Vicky.  Hi, everybody.  Thank you for the opportunity to speak with you.  Is the audio okay?

     >> VICKY CHARISI: Yes, we can hear you okay.

     >> DOMINIC REGESTER: Great.  Thank you. 

     Like Daniella said, I'm the Director of the Centre for Education Transformation, which is part of Salzburg Global Seminar, a small NGO based in Salzburg, Austria, that was founded just after the Second World War as part of a European and trans-Atlantic peace-building initiative. 

     I wanted to talk a little bit about the education landscape globally at the moment and why there is such a compelling case for education transformation. 

     The beginnings of this really predate COVID.  There was an increasing understanding that the world, and the vast majority of education systems, had got into what is being described as a learning crisis: that students around the world, particularly in K-12 education, were not meeting literacy levels, and that school systems weren't equipping students with the kinds of skills that were going to be needed to address key concerns in the 21st century. 

     There was also a growing realization that education systems had in many ways perpetuated some of the big social injustices that we have been dealing with for the last few years.

     Then COVID happened, and schools were locked down.  At one point in 2020, something like 95% of the world's school-aged children were not in school.  One of the things that COVID did for global education systems is that it shone a light on the massive inequalities that exist within and between systems.  And as there was greater understanding of these inequalities, and as parents were much closer to the process of learning and saw what their children needed to do, it helped catalyse this really interesting debate, still playing out at the moment, as to whether we were using the time that we have children in school in the most productive ways. 

     So you put the inequalities from COVID alongside the big social justice movements like Black Lives Matter or #MeToo, looking at gender equality or racial justice, alongside the climate crisis and the way in which it is impacting more and more people's lives, but in a very unequal manner.  All of this catalysed the great process of education transformation.

     Last September, September 2022, UNESCO and other UN agencies, UNICEF included, hosted the Transforming Education Summit in New York, the largest education convening in 40 years.  The purpose of the summit was to share great practice and innovation and to catalyse a process of education transformation, because there was a realization that education systems may have been contributing, or had been contributing, to the different challenges that now needed to be addressed: issues of inequality, the learning crisis, and issues of social justice.

     There are now 141 UN Member States that have started a process of education transformation and have developed plans and approaches for what it is that they want to transform.  After the summit, an amazing organization, the Centre for Global Development, did an analysis of the key themes coming through from the transformation plans.

     This is based on a keyword analysis of what had been submitted in the proposals from different systems to transform their systems.  The top issue, by a very long way, is around teaching and learning.  The second most important issue was around teachers and teacher retention, which is not that surprising.

     In the teaching profession as a whole, a third of teachers globally leave the profession every 12 years at the moment. 

          The third issue was technology.  But when we dived into the technology theme, it wasn't particularly about AI; it was more about device deployment and access to the Internet.  And then there were employment skills, issues of inclusion, issues of access, and the climate crisis.

     So that covers most of the top 10, and these are the issues coming from ministers of education in national education systems.  As you all know, there are an enormous number of Civil Society organizations around the world that support education and education reform and transformation.  So alongside the analysis of the keywords coming up in the transforming education policies and approaches, there is also a kind of parallel analysis of what Civil Society priorities are for transforming education.

     Some of the key things coming up from Civil Society organizations are around intergenerational collaboration in transforming education, and how systems can pivot to being more collaborative and less competitive.  There is a strong focus on social and emotional learning, psychosocial support, and the mental health and well-being of teachers and students around the world.  And then there is this idea of how transformed systems can contribute to more inclusive futures, or address some of these long-standing structural social injustices that have existed for many, many decades. 

     The reason for mentioning all of this, as context for the global transforming education movement, which is about a year in now, is really to pose the question: is AI addressing these things in the right way?  Are the tech sector and the people who are developing AI applications for education responding to the key concerns coming from the education profession?

     I think there is a very acute concern that as more systems spend more resources on the application of AI in education, it is also going to increase the Digital Divide, which is already very clear between education systems, and between students who have access to AI, or who are skilled in using it and understand it, and those who don't.

     I'm in London at the moment because I have been speaking at a summit for the well-being forum; the theme was human well-being in the age of AI.  The conference ran all day yesterday.  It is a meeting of business, education, and health professionals, of religious and other spiritual leaders, and of tech entrepreneurs.  One of the key things that came through yesterday was the high degree of anxiety that all these representatives of different sectors have about AI and about the risk that AI can pose to ways of life.

     One of the most interesting quotes came from yesterday, and I wanted to share it with you as I come to the end of what I wanted to say: in the rush to be modern, are we missing the chance to be meaningful?  As people lean more and more on the possibility of AI, are we losing out on things that are more important in education and its systems?  What I hope this short presentation or talk has been able to do is share some of the key themes or key trends that are taking place in education transformation around the world.

     I would really encourage you all, if you have the chance to engage with teachers or with education leaders and system or institution leaders, to listen to what the key concerns in the sector are at the moment: how can AI be applied to addressing some of these concerns, and what can be done to address the anxiety that exists around the Digital Divide, the lack of understanding of AI, or the risk that it is going to exacerbate inequalities within systems or between systems. 

     Thank you very much for the chance to speak to you all today.  I wish you all a very successful rest of conference.

     >> VICKY CHARISI: Thank you so much, Dominic.

     [APPLAUSE]

     >> VICKY CHARISI: Thank you.  I hope you will stay a little bit more with us because we have a Q&A afterwards.  Is this okay with you?

     >> DOMINIC REGESTER: Yes, it's fine.

     >> VICKY CHARISI: Thank you.  Now it is a great pleasure to introduce Professor Dr Bernhard Sendhoff, who is Chief Executive Officer of the global network of Honda Research Institutes and leader of the executive council formed of the three research institutes in Europe, Japan and the U.S.  The floor is yours.

     >> BERNHARD SENDHOFF: Thank you very much, Vicky, and Steven and Randy, for inviting me to say a few words about what brought a company like Honda into the domain of AI for children, what we find so exciting about this, how we want to go about it in the future, and what we plan to do.

     Now, the Honda Research Institutes are the advanced research arm of the Honda Corporation, and our mission is really two-fold.  On the one hand, we want to enrich our partners with innovations that address new products, services and also experiences. 

     At the same time, we do science and want to create knowledge for a society that flourishes.  These are like the two legs we stand on: on the one hand the scientific effort, and on the other hand bringing the scientific effort into innovations.  Our founder was very much about dreams of the future, and when we think about the future -- when I talk to young researchers I often say, you know, it is a privilege that we have in creating the future, but it is also a responsibility.  And when you judge your own work, ask yourself: is the future you are creating the future that you want your children to live in?  This connects us with the role of children in our research.

     For researchers it is really about the innovations our children will be using.  At the same time, and Steven mentioned this, we have seen a tremendous success of AI and many other technologies in the last decade.  However, we have to honestly say, if you switch on the news for a couple of minutes, that we haven't been successful in making society more peaceful or happy with the technology.  One issue is the rising alarm about social fragmentation, and you see this in almost all societies.  We see that the only way to address this is to focus more on togetherness in societies.

     And togetherness, of course, starts with the children.  It is our children who can learn how to respect differences across cultures and how to enjoy diversity, towards something that is maybe a very long-term dream: something like a global citizenship.  So we started thinking about how we can use AI innovations in order to empower children to understand more about each other.  We called this our target scenario.  And Randy talked about how, together with the great work from Vicky and others, we have been able to actually bring this to life and use embodied technology, the tabletop robot Haru that we developed at the Honda Research Institute in Japan, in order to mediate between different cultures in different schools in Australia and in Japan.  That was our first target scenario. 

     But as you can see on the list here, we envision expanding this quite substantially.  I have highlighted on the slide here in particular two extensions.  One is really going into developing countries like Uganda.  We have had a wonderful experience there, and we heard the wonderful intervention earlier, with cultural differences that are, again, a lot greater than, for example, those between Australia and Japan. 

     And another extension is into Ukraine, which we know has been a war zone for a couple of years now.  There, of course, the environmental conditions for children, and for the education of children, again pose some very specific challenges.  And I think this is where mediation and fostering understanding of each other can really play a large role. 

     And Ruyuma gave a nice statement about his experience with Haru.  When you talked a little bit about some of the technological challenges we still have, I thought to myself, well, this can actually also be something nice, right?  Because there is nothing as nice as two people being able to joke about the technological shortcomings of a robot.  And there is nothing like connecting in this way, even across different cultures and maybe different continents.

     Right from the start, actually, the guidance that UNICEF produced, and I really think they did great work on this, was really a guide for us when we thought about how we have to specifically take care with AI in the context of children.  And I use two keywords here: protect and support.  I think both really go hand in hand.  It is very clear that children need specific protection.  We see this in much of the data, and it was mentioned that there is, of course, also an increasing experience of mental health conditions for a number of reasons.  So we need to take special care. 

     But, on the other hand, there is also great support that we can put into children's hands, and this is actually backed up by the data.  Children and young adults all around the world use the new technology, and I have no doubt they will also use the most recent advances in AI successfully to increase things like connectivity and their own creativity.  Both protect and support go hand in hand.  I think sometimes a lot of people also talk about the technology without listening to those who are often the earliest adopters of the technology -- and those are the young adults and the children. 

     I think for us it is actually also quite good to listen more to those people who are actually using these things first.

     So I already mentioned that one of the starting points was the use of mediation with AI, with embodied AI technology, in the educational context.  However, at the same time, we also started another very exciting project about using AI technology in a hospital environment.  Generally, we are interested in supporting children in vulnerable situations.  The hospital environment is one.  Conflict, disaster, flight and displacement, for example, are others.  And they share three characteristics.  The needs of children are often inadequately addressed.  The reason is not always the same; however, this fact stands out for all three areas.

     Children, I think that is very clear, need child-specific explanation and reassurance, something that is not always possible in all of these three situations.  They often even need support in expressing their feelings, and there are some very exciting projects really focusing on helping children to tell others how they feel about things.

     And they still need to be children, even in difficult situations like disaster or displacement.  And often they need additional trustees, because the parent, who is the natural trustee for a child, is often part of that difficult environment, right?  Parents are there in the disaster or flight situation.  They are in the hospital environment.  Children sense that their parents don't feel well when they are ill, so that places the parents inside the situation and doesn't give them the opportunity to be a neutral trustee.  We started some first exciting experiments with a cancer hospital in Seville, and we are expanding this: we are in discussions on how we can use Haru in the many different contexts possible there, and also expanding this to a second partner.

     Now I would like to come back to my first slide.  I mentioned that social fragmentation is a huge issue for us.  Togetherness is maybe one way to approach this, and togetherness in our society really starts with the children.  We at HRI believe we have unique expertise on the interplay between embodiment, empathic behavior and creative social interaction.  We have seen the developments in GenerativeAI.  At the same time, in particular in interaction with children, those systems have severe limitations.  And that places us right at the challenge of creative interaction.

     We want to continue to engage with our partners to make the expertise and the advances in AI, with the benefit of comforting and connecting embodiments, available to children in a number of different situations.

     And we want to do this explicitly, and really with a special focus on developing countries, because there, of course, the challenges are again slightly different.  However, these are young continents.  Africa is a very young continent.  When we talk about the future, and the future education and support of our children, it has to be done in context with those countries as well, of course.  And they rightfully expect this.

     One last thought: we have seen the recent progress in GenerativeAI systems and in how we build those systems, and I think there is a huge discussion on whether this will be able to continue in this way.  We believe that future AI systems also have to learn, in interaction with human society, to share our human values as they develop.  At the moment we throw a lot of data at our systems, and rightfully people point out that we would never do this with our children, right?  We carefully curate our children's education.  We believe that in the future children and AI systems will also mutually benefit from each other, because they will have the possibility of learning alongside each other in a bidirectional way: learning values the way we teach our children the values of our society and how we go about things.

     Now, at the Honda Research Institute, of course, we don't only focus on AI and children.  We have actually identified the United Nations Sustainable Development Goals as guiding stars for our development of innovations: for putting AI and embodied AI technology into innovations, turning our HRI motto, innovate through science, into something that has a tangible benefit, in particular in the context of the Sustainable Development Goals.  I would like to thank the organizers very much for giving me the opportunity to briefly talk about HRI here, and you for listening.  Thank you very much.

     >> VICKY CHARISI: Thank you, Bernhard.  Thank you.  We have some time for questions.  I would like to invite the speakers that are here to have a seat here: Steven, Randy, Judith.  And we have also our online speakers.  And now it is time for questions.  Are there any questions from the audience?  Selma.

     >> AUDIENCE: Hi.  I'm from the youth program in Brazil.  I work in human rights, and I am a young man who advocates for children's rights in Brazil in a UNICEF project, and that is why the institution's proposals are always very important for me. 

     As the panel very well pointed out, there is an interaction between AI and mental health, and tools such as ISPA have been used, for example, on Telegram, possibly for mental health support, which can help children and adolescents online.  My question is: how can UNICEF help in the area of AI, children and mental health?  Thank you.  Sorry for my English.

     >> VICKY CHARISI: Thank you very much.  Steven, would you like to start with this since it was about UNICEF? 

     >> STEVEN VOSLOO: Thank you very much for the question.  This is an area that is crucially important for us but not just for UNICEF, for anybody working in the space of how children interact with technology and especially in the context of mental health and mental health support.

     And I don't know who -- nobody has all of the answers right now.  We know there is a massive mental health need.  There is the potential for technology to support, and there is the potential for technology to also get it wrong, which could have very severe effects if it gives the wrong advice or shares information that was given in a very confidential environment.  It is a very, very sensitive space.

     I think we all need to get involved here.  We need the children.  We need, of course, the technology developers and, as Bernhard said, a responsible development approach; this is not an area that we should rush into, for sure.  We need to watch it.  It is going to happen, and if we get it right there is huge potential for providing support.  And I think, as I said earlier, about what has really happened with ChatGPT, you know, everyone talks about that as the one thing.

     And, of course, foundational models are not new, and there are other models, not just ChatGPT, but that is the one that has kind of become the placeholder for the whole new moment -- a cultural moment, not just a technological one, as a speaker said earlier.  AI used to be in the background: the algorithm behind your newsfeed, the Instagram photo, the Snap photo.  It is now something that you interact with.  We don't know what the long-term effects are.  This is why we also need solid research around the impacts of AI on all of us as we interact with it, but, of course, we focus on children, for the opportunities and also the potential risks.  Thank you.

     >> VICKY CHARISI: Thank you very much.  Judith, you also do work with mental health.  Would you like to say something?

     >> JUDITH OKONKWO: Thank you very much.  I was nodding as Steven was talking.  Everything he said completely resonated. 

     One thing I would like to say is that right now in the world, in all of the initiatives happening where people are thinking about things like governance for AI and the metaverse, I think that we have to prioritize including young people in the conversations. 

     UNICEF, of course, does that brilliantly.  So many more organizations need to.  Every time I'm in a room where conversations have been had and the youngest people look like me, I know we have a problem.  Whatever we can do to make sure that young people are in the rooms they need to be in, we definitely should.  You were talking about getting it wrong.  I don't know if people saw, but the BBC was reporting recently about a young man who had been arrested on the grounds of Windsor Castle for trying to kill the queen, and he had been egged on by his AI assistant to go and do it.

     So already, you know, we are seeing that we don't quite know where we are going with these technologies, but we definitely have to come together to figure out what future we want for ourselves.

     >> VICKY CHARISI: I would like to do a small rearrangement.  So you belong there, please; it is about children.  Randy, would you mind going to sit there so I can -- okay.  Thank you very much.  And apologies for the interruption.  Any other questions?  Selma, yeah?

     >> AUDIENCE: Hi, I'm Selma from Indiana University.  It is a pleasure to see the diversity and thoughts that focus on children and their presence in the work. 

     One thing I was curious about, Steve: you started by saying you had developed these guidelines and you knew they weren't the end, and there are so many different interesting things going on. 

     I am wondering if both you and the folks who participated in the projects could speak to either how the guidelines were present and helped them in the projects, and/or how they see their projects as expanding on or further defining aspects of the guidelines that maybe weren't already in there.  Thank you.

     >> STEVEN VOSLOO: Okay.  Thanks, Selma.  That is a really great question.  The -- I should have mentioned this earlier. 

     The guidance has been published and the eight case studies are online on the UNICEF page.  We wanted a diversity of projects from different locations but also different contexts.  For example, one of the projects, in Finland, provides mental health support -- or at least, sorry, mental health information, not support -- where children can find information as a kind of first port of call.

     It handles initial questions around potential symptoms, for children looking for that first line of informational support.  Not therapeutic support.  But that was one of the case studies, an ongoing project by the Medical University of Helsinki.  And that was interesting because it is a hospital in a very developed nation, technologically developed and also government supported, and they have many ethicists on the team that developed the product.

     It is not only software developers but ethnographers, researchers, an ethics team, doctors and psychologists, and obviously they did a lot of testing with the children.  That is why we chose it.  MEC3D also addressed mental health support, but not necessarily for the patient: for the people around the patient, so the child on the spectrum.

     And then, for example, we did one with the Turing Institute in the UK that was a really nice example of how you engage the public on developing public policy on AI.  And while the case studies have kind of finished, the work continues.  The Turing Institute has been asked by the government of Scotland to engage children in Scotland on AI: what excites them about AI, what worries them.  And I think we will come up with a question on that.  What kind of future do they want?

     The institute's initial reports and methodology and everything are online.  It is a really rich resource, and it will inform policymakers as they regulate.  It was interesting for us that in the end, after the eight case studies, the guidance didn't really change so much, which was kind of a relief.  We thought, wow, we seem to have got it right the first time. 

     But it might also be because the guidance is almost at the level of principles, and we do that because we are a global organization.  So you have to be quite high level, or generic, and then it gets adapted to the local context.  The unfortunate thing is that everybody wants the details of how to adapt it.  That is the challenge: how do you move from principles to practice?  But that is where, in the end, we said the guidance hasn't changed that much, but it has been enriched by the case studies.  If you want to learn how different organizations have applied them, then go and read these.

     I will just say one more thing.  There are nine principles, or requirements, for child-centred AI in the guidance.  For example, the consideration and inclusion of children in developing AI systems and policies.  We found in the end that each of the case studies only picked two or three, and we realized that is actually fine.  In your project or initiative there are two or three that speak more to you than others.  Participatory design is one.  Or fairness, or non-discrimination.  In the end, only a few tend to be the focus of your work.  So everything is online.

     We are, of course, just thinking about whether there is a need to update them or add to them in the light of GenerativeAI.  There are a lot more unknowns now.  We don't know how the human computer interaction will evolve over time, and we want to make it work in a way that upholds rights and is responsible.  But everybody is kind of building the plane, or fixing the plane, while it is in the air.  We are very keen to do more work in this space in light of ongoing developments.  Yeah.

     >> VICKY CHARISI: Thank you very much, Steven.  Are there any other questions from the audience?  Yes?  Please.

     >> AUDIENCE: Hi, this is Edmon Chung from dot-Asia.  We operate the dot kids domain. 

     What is being done here is great and something that we would like dot kids to take on and also help promote.  Speaking personally, I wanted to ask, I guess, Ruyuma.  One of the last comments gave me a little bit of a concern.  Your last comment was that maybe the evaluation or the assessment can be more fair with AI.  Of course, it could be.  But it could also be less fair, and that is part of the discussion -- that is the heart of the discussion.  So what if it is not fair?

     And that brings me to a second question that I wanted to ask as well.  I think it was mentioned that the Uganda project was focused on fairness and exploring fairness, but I didn't quite understand from Joy what was being discussed and how AI was part of it.  It would be useful to hear more about that.

     Because really, as a father of an 8- and a 10-year-old, I was quite pleasantly surprised that my 10-year-old, just starting year seven this September, told me that their teachers are actually getting them to use AI to help them with homework as part of the curriculum.  It is exciting for me.  But also, you know, we know that technology is not entirely neutral, especially when we talk about these things; it is a symbiotic relationship. 

     I really wanted to hear about the experience behind your ending remark about fairness, and then how AI and fairness really work in the responses from the case studies.  Thank you.

     >> VICKY CHARISI: Thank you.  Do you mind if I take this question, because I did the study on fairness?  Is that okay with you? 

     So indeed, the talk by Joy was focused on something else, not on that specific study.  We published a scientific publication on this and we can share the links later.  The main research question for the study was to understand whether there are cultural differences in perceived fairness.  We wanted to see how children in these two environments, with the cultural but also economic differences between them, would focus on different aspects of fairness.

     So what we did was provide different scenarios.  The whole activity was based on storytelling frameworks.  We let the kids talk about these scenarios in their own words, their own drawings, et cetera.  And then some researchers analysed this data in a systematic way.

     And what we found was that, indeed, children in Uganda focus more on aspects of fairness that have to do with material aspects.  So they would talk more about how, for example, something was shared among children, et cetera.  The children in Japan, meanwhile, would focus more on psychological effects.  So, for example, they would talk about behaviors of teachers.  This is just an example to show how the priorities differ: when we keep the notion of fairness abstract it doesn't really show a lot, but when we go into the details, the different cultures prioritize in different ways.  Those were the results of our study.  Of course, it was only a starting point and there is a lot to explore.  And not only by us: there is a huge community of developmental and social psychologists that explores this topic.

     As for the first question: would you like to repeat it?

     >> AUDIENCE: You mentioned at the end, if I understood you correctly, that the assessment of your work through AI might be more fair.  Tell us a little bit more about that.  What if it is not fair?  How do you know it is not fair?  What if you trust the machine, too?

     >> VICKY CHARISI: Is there someone -- Judith?

     >> RUYUMA YASUTAKE: I think -- I think some school teachers have an individual -- an individual sense of evaluation.  So a teacher's evaluation -- how do I say -- judgment is not equal.  So I guess AI can evaluate fairly.  Yeah.

     >> VICKY CHARISI: I mean, apparently there are some hopes here, right?  I don't really believe that anybody thinks there is an absolutely fair evaluation with AI; this is true.  But from young students there is probably a hope: when they see their systems or their schools evaluating in different ways, and they experience a little bit of human unfairness, they put some hope in AI.  But, of course, this is something that we really, really need to take very seriously.  Yes, please.

     >> AUDIENCE: Hi.  I'm from South Africa and Zambia and this is not a question.  It is more of a comment just listening to the discourse. 

     There is a concept that we use quite often in South Africa: progressive realization.  When we speak about AI at the stage that we are at globally, your question is quite important.  You know, what is fairness?  What are the assessments?  What are the criteria? 

     And as you quite correctly put it, in different geographies and instances, and even in the same locality, based on various factors, that concept of fairness is so subjective.

     And AI gives an almost objective element to these subjective things.  The question on fairness really does veer off to the algorithmic biases that we do speak about, and I think that is also very pertinent for the conversation: the more data we have, and the more proper data we have, based on your comments about this context and that context, the more we develop, right?

     So I think the answer to the fairness question is that we are progressively trying to realize it.  And I think we are at a really infant stage when it comes to that.  Hence, you know, the data conversation is quite important to pair with this one.  That's just maybe a summary.

     >> VICKY CHARISI: Thank you very much for the intervention, indeed.  I am afraid we are running a little bit out of time.  Now I would like to give the floor to our online moderator, who is also our Rapporteur.  So Daniella -- can we have Daniella on the screen -- is going to give us her view of the conclusions of this workshop.  Daniella.  Yeah.  Please.

     >> DANIELLA DIPAOLA: Hello, everyone.  Thank you all for your wonderful comments.  This was a very productive discussion, and I really think that the different perspectives added a lot to the conversation. 

     I have two key takeaways and two call-to-action points.  The two key takeaways:  The first is that, despite the challenges in terms of infrastructure, children from underrepresented cultures and countries should be included in our activities for AI and Children's Rights.  It is urgent that we consider the needs and interests of all children, and not only those from privileged backgrounds. 

     Secondly, this project is only the first step of responsible design of robots for children, and various communities can contribute to its expansion, for example by adding to it the rights to explainability and accountability, and AI literacy for all.  Formal education is power, and industry experience with responsible innovation can be a catalyst for the well-being of all children.  I would now like to share the call-to-action points.

     The first is that expansion to additional contexts, such as hospitalized children, and also to formal education with the inclusion of schools, is very important, as is including underrepresented groups of people, such as those from the Global South.

     Secondly, there is a call for the necessary infrastructure and technology development that will give all children equal opportunities in an online world.  We need to ensure that AI opportunities come together with responsible and ethical robot designs.  Thank you.

     >> VICKY CHARISI: Thank you so much.  It was really good.  And I think it is time to close, Steven.  So the floor is yours.

     >> STEVEN VOSLOO: Yeah.  Okay.  So firstly, thank you very much.  I think one of the key takeaways is that this is the beginning of the journey.  We shared what UNICEF and partners have done, and there are many others that have not been mentioned, as we try to work out how children can safely and in a supported way engage with AI. 

     The reality is that while we sit here and debate the important issues, children are using AI out there and it is more and more every day.  It is urgent.  Everybody needs to get involved.

     Thank you for raising the data issue.  It is really critical.  And to Daniella's point, we have the challenge that the datasets are not complete.  They are much more kind of Global North.  We need data from children in the majority world -- I like that term that is being used here -- and the Global South.  We know that data collection at the moment doesn't happen responsibly, so we need to tick the two boxes at the same time.  The journey is going to continue.  Please work with us and we will work with you.

     And we need to work -- I mean we keep saying this, but it really is critical to work with children and to walk with children on this journey. 

     So Ruyuma, thank you for being here.  And thank you for being involved in the project.  We recently engaged a digital policy specialist from Kenya who could easily have been on the panel, and she was making the point about Africa being such a young population and how crazy it is to see more and more how older people like us -- sorry, I'm speaking for all of us here and taking the liberty -- are regulating a technology that we don't really understand, a technology that is so much used by a generation that is going to be so much more impacted by it, and we are not having them at the table.  That was a really well-put point.

     So for all of us here who do bring children to the table, well done and please continue.  Thank you.  Thanks, Vicky.

     >> VICKY CHARISI: Thank you very much.  Thank you to all for the support.  Thank you for being in the session.  And I hope we can continue this work on AI and Children's Rights.  Thank you.

     [APPLAUSE]