The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
>> MODERATOR: Hello, everybody. We'll start the session now. We would love to have your thoughts on artificial intelligence and the SDGs. Hello, everybody. How are you doing? A long day. Okay. Today is our session on artificial intelligence for the sustainable development goals, or artificial intelligence readiness for the sustainable development goals. What we hope to have is some breakaway sessions in which we solicit from you, the IGF attendees, what would be opportunities for using artificial intelligence to achieve the sustainable development goals, what would be challenges for artificial intelligence in achieving them, what are the possible harms from artificial intelligence for the sustainable development goals, and then how we would measure progress towards using artificial intelligence for achieving the sustainable development goals. Could we have someone from the audience explain to us what the sustainable development goals are? Does anyone want to? Who doesn't know about the sustainable development goals? Okay. So when you break up into the sessions, I'll put the sustainable development goals up there. There are 17 sustainable development goals, and the IGF is themed around them. So first I want to introduce the panelists. They will be responsible for facilitating discussion, but I want someone from each of the groups to volunteer as rapporteur. So we'll report on the piece of paper or by following one of these two links. We'll have table one here, table two here, table three on that corner, and table four on that corner. I would just like an introduction from each of the panelists and their perspective on using AI for the SDGs. They'll either be answering one of the four questions provided or, yeah, offering a perspective on what is honestly quite a large topic. So we'll start with Greg Shannon from Carnegie Mellon University, famous for computer emergency response teams. Greg, you are a cybersecurity expert, is that correct?
>> Yes, correct. Thank you. I am the chief scientist of the CERT Division at the Software Engineering Institute at Carnegie Mellon University. As noted, we started CERT back in 1988, after the first multi-platform international incident that needed a coordinated response. So you might wonder why I'm talking to you about artificial intelligence, yet another technology that is taking the world by storm. There are opportunities for it to do a great deal of good. As you know, we're a leader in artificial intelligence, both in developing the technologies and in applying them. Relative to the SDGs, one of the key aspects was borne out in one of the earlier sessions this morning: artificial intelligence is an advanced capability. It requires connectivity, and the capability at a technical level to understand data at some rudimentary level. So it challenges digital literacy. If there is not digital literacy, you'll have a difficult time enabling AI to have an impact beyond what a multinational or a large government might be able to do.
What we see is that where AI really takes off is where there are good, easy-to-use tools. The open source community has done a good job so far. It still requires a great deal of technical competency, but it is also the best place to come to an initial understanding, and then to incorporate protections and concepts to, say, protect data privacy, or techniques for ensuring fairness. That's part of the strength of the open source approach, in part because it is a multi-stakeholder approach in terms of who contributes and who uses those platforms. There is no exclusion as to who can access it. What we at CERT understand is that there will be vulnerabilities, weaknesses of a different flavor, if you will, than we've seen in the past relative to cybersecurity. This comes through in the notion of harms. There are general approaches that have particular weaknesses: they can be easily biased and manipulated, and can easily be mistrained. Part of the work we do is to help clarify and highlight those, and the research community has been doing a great deal of work in this area to try to improve the quality of these algorithms and their implementations. I'll close in just saying that supporting the SDGs requires data when it comes to AI. I think that's really going to be the key aspect: who can put their fingers on data. And AI, as we've seen, can actually do some interesting things with a little bit of data, especially when there is a well-trained, or reasonably robust, model for helping classify. You can imagine a farmer wanting to quickly identify what sort of insect is attacking their plant, taking a picture of the leaf and being able to identify that. It's similar to applications where you take a picture of your hand and get a quick diagnosis that maybe you have skin cancer or something that you should go get checked.
So the ability to develop easy-to-use applications I think can have a huge impact on something even as important as zero hunger. Thank you.
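The classify-with-a-trained-model idea the speaker describes can be sketched in a few lines. This is a toy nearest-prototype classifier, not the deep network a real application would use, and every class name and feature value below is invented for illustration:

```python
def nearest_class(features, prototypes):
    """Return the label whose prototype vector is closest to `features`.
    A toy stand-in for a trained image classifier: a real system would
    extract the feature vector with a deep network, not by hand."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist2(features, prototypes[label]))

# Illustrative class prototypes (entirely made-up feature values).
prototypes = {
    "aphid damage": (0.9, 0.1, 0.3),
    "leaf miner": (0.2, 0.8, 0.5),
    "healthy": (0.1, 0.1, 0.9),
}

photo_features = (0.85, 0.15, 0.25)  # imagined features from the farmer's photo
print(nearest_class(photo_features, prototypes))  # → aphid damage
```

The point of the sketch is the asymmetry the speaker highlights: once a reasonably robust model (here, the prototype table) exists, classifying a single new observation is cheap enough to run on a phone.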
>> MODERATOR: Thank you very much. We have Donggi Lee. You are from the South Korean youth IGF, is that correct?
>> DONGGI LEE: Not yet.
>> MODERATOR: Technical community, and he is from the Korean contingent. Could you tell us a bit more about yourself and about AI for SDGs?
>> DONGGI LEE: My name is Donggi Lee. I'm from South Korea. I'm currently a committee member of the Korea Internet Governance Alliance, and I ran the first youth session there by myself this year. I am also a graduate student majoring in industrial engineering, specializing in software engineering. Having to understand different stakeholders is why I would like to suggest that we treat artificial intelligence as a tool rather than a complete technology for the sustainable development goals this time. As technology rapidly develops, artificial intelligence has made great progress, but what we have to consider is the policy side. Artificial intelligence is mostly well-known for magic, like making a video or an image in only one second, but it is really all about finding patterns, including in the text and structure of unstructured data. Artificial intelligence could be a useful tool to collect different ideas, which a single person's manpower cannot cover, in terms of raw data from all over the world, especially online.
In the past, interpretation depended on who collected the data and how they understood it. For example, we can use social media data to understand what things interest people, and automated data collection could be a resource for interactive decision making. My point is that AI is not an almighty technology, but it is a great tool to gather the voices of people all over the world. In the past, new policies, especially those related to the SDGs, were applied all at once. However, we can now customize a policy and adjust it depending on variables such as the level of understanding and infrastructure, geopolitical issues, and so on. Through the dynamic usage of AI, I believe we can find effective methods to implement the SDGs with broader coverage and increased productivity. Thank you.
>> MODERATOR: Thank you very much. Next is my colleague Raymond from the regional academic network on IT policy. Raymond, Sarah Kiden, and I are working on artificial intelligence in Africa. Raymond will introduce himself as well as his perspectives on AI for SDGs.
>> RAYMOND OKWUDIRI ONUOHA: Thank you, Alex. I will move straight to my opening thoughts with regards to artificial intelligence readiness for the SDGs and what that will actually mean from a development perspective, especially for Africa. We know that the AI era brings with it the data revolution that is critical for promoting and achieving the SDGs, and that helps in different areas to measure progress based on analysis of the increased data, so that we can know how far or how well we have gone on the SDGs and know who has been included and who has been excluded from the process. So that's a critical imperative for AI within the SDGs context.
Having massive amounts of data is helpful for countries to plan, design and implement development-oriented public policies in general. For the developing countries, this presents some possibilities from the Fourth Industrial Revolution with regards to critical societal issues that cut across the region. Artificial intelligence can provide for structural transformation and position them for some form of competitive advantage, when it is done within the context of proper institutional governance.
What is critical, and what does it mean, for a region like Africa to be ready to harness the potential of this technology? I would just like to highlight two critical aspects: one is the soft part, and the other the hard part. The soft part will require the redefinition of the principles, norms and policies for data governance in the digital era, especially with regards to the traditional instruments that we have used to govern these processes before now. There is also the need to restructure the institutional configurations for sustainable development and governance, which is closely related to the capacity to adopt new rules and to adapt international structures to govern data and AI and their impact on our lives and rights. This would involve restructuring the data architecture systems, trying to make them more open, and making sure there is available data that is interoperable and digitally transmittable in formats that can be used to make the actual interventions and the difference that the SDGs hope to achieve. Data protection policies are still weak in most developing countries, in contrast to, perhaps, Europe with the GDPR, so concerns around biases and their consequences for rights persist within the region as this technology develops. When we talk about structural equality and non-discrimination, something flagged by civil rights groups, it is vital to recognize that AI governance will require a more comprehensive approach to the data: not just personal identity, but the data as an economic resource as well, considering the new African continental agreement regime within the region. It is important to highlight security and to understand that this data can be a critical asset in leapfrogging the region with regards to its digital economy.
Those are some of the critical aspects that Africa, or developing regions, should look at if we want to say that we are ready, that we're prepared, to harness AI's capabilities to help achieve the SDGs. Thank you.
>> MODERATOR: Thank you very much, Raymond. Sarah Kiden is also a colleague of ours at the regional academic network on IT policy, and is a Ph.D. candidate at the University of Dundee. What is your discipline again?
>> SARAH KIDEN: I am a Ph.D. student at the University of Dundee. We're trying to advocate for connected devices that are secure, open and trustworthy. So I'll be focusing on the Internet of Things for communities and neighborhoods. Thank you.
>> MODERATOR: Could you offer us a perspective on AI for SDGs, and maybe, if you like, the Internet of Things angle and why it's important for AI? I'm surprising Sarah because I asked her to help moderate. We are going to do breakaway panels now for, let's say, half an hour, and then we can report back and have discussions with the panelists. Anyone here who really hates breakaway sessions? No, okay. Sure. I'm sure everybody does.
>> It's a matter of what's most productive for you to achieve your goal.
>> MODERATOR: With the breakaway sessions, the report writes itself. What we really want is your input. Maybe we have a vote: breakaway sessions, or are we going to open it to the floor now? Who goes for breakaway sessions? Hands up. Opening it to the floor? Okay. So shall we just open it up then. Okay. I'll ask -- I think that might be an issue. Last year we were too big. Now we're too small. Okay. Sure. I'll ask you all to introduce yourselves briefly when you would like to make your question or input, and explain who you are and your stakeholder group, and then we can dive into it. So we'll first deal with opportunities for harnessing AI for the SDGs. Would anyone like to chime in with an example from their home country on opportunities for AI and SDGs?
>> AUDIENCE: Do you want a rapporteur for each question?
>> MODERATOR: If anyone wants to take notes, that's fine. We'll be acting as rapporteurs and integrating our report.
>> AUDIENCE: My name is Alberto, co-founder of a startup based in Berlin called Hadera Sustainable Solutions. We do impact monitoring, which means that we cooperate with local institutions and collect data from poor households to first draw a baseline, identify basic needs, and address these needs through projects that are then executed by these local institutions. Then we aim to be able to track the advancements identified in the region. So we do use AI. We use it most of all for clustering at the moment, to identify common features in a very region-diverse dataset of thousands and thousands of variables.
The thing that I want to underline here is that the input, or the fuel, for AI will always be data. If the focus is sustainability, the only way to make sure that this is actually addressed is by asking the right set of questions, or collecting the right kind of data in the location itself. I am not very keen on addressing development in regions based on aggregated data, or data obtained by statistical observations. Rather, the data should be collected locally, and should also be time-framed because, of course, conditions in a place change over time, and that is precisely what happens when any measure is taken. So my point here is that data will always feed artificial intelligence, and the intelligence of this AI will be based on what we feed into the machine. Thank you.
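The clustering use the speaker mentions, finding common features across household-level survey data, can be sketched with a minimal k-means. Everything here is illustrative: the household vectors are invented, and a real pipeline would use a library implementation on far higher-dimensional data:

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Tiny k-means: deterministic farthest-point init, then standard updates."""
    centroids = [points[0]]
    while len(centroids) < k:
        # next centroid: the point farthest from all chosen centroids
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [
            tuple(sum(dim) / len(members) for dim in zip(*members)) if members else centroids[i]
            for i, members in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical household rows: (hours of electricity per day, minutes to water).
households = [(2, 55), (3, 60), (2.5, 50), (22, 5), (23, 8), (21, 6)]
centroids, clusters = kmeans(households, k=2)
```

On this made-up data the two clusters separate the low-access households from the high-access ones, which is the kind of "common feature" grouping the speaker describes using to target interventions.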
>> MODERATOR: Can you clarify what you said then rather than data obtained from statistical observations? What's the difference between data obtained from a statistical observation and an AI?
>> AUDIENCE: Raw data, or non-aggregated data, is data collected directly from the source. So if we are addressing issues relevant to a population, we should ask the people, or we should measure conditions from the environment. You might be interested in information that we can collect through IoT, for instance, but then it has to be from the location itself. If we assign this task to a dataset based on statistical observations from random individuals in an area, and then draw the inference that everyone should more or less be in a similar condition, we're not addressing the individual, we're not addressing the problems that people are living with, but only vague observations, which aren't closing the data gap between knowing what the people need and what we can infer from high-level data. That's why I see great value in the opportunity, through IoT, for instance, or through local institutions that have direct contact and exchange with people from the region, to rely on data with high quality of content to address the issues of sustainability.
>> MODERATOR: Thank you very much. Do -- does any panelist want to comment on what's been said?
>> I'm curious, what type of data? What's the application?
>> AUDIENCE: The kind of data we collect is based on surveys that are put directly to the people in these regions in development. The surveys are drawn from the SDGs themselves, so it is possible to develop surveys which are aligned with the SDGs. For instance, to mention an example, SDG number 7 is energy access for all, and there is a tool called the multi-tier framework which uses different attributes to describe energy conditions at the household level. Using these latest methodologies to collect data aligned with the SDGs can then provide a reliable source of data, to be analyzed at a level where AI can actually be a tool that addresses the SDGs.
>> MODERATOR: Thank you very much. Do we have someone else from the floor? We're still talking about opportunities for AI for SDGs.
>> AUDIENCE: Regarding the previous floor participant's comments, I would like to share my experience. Basically, he collects data from surveys that he thinks capture what matters. However, a lot of people have a misunderstanding about AI: that it is something from which we can automatically get answers. That is a common mistake. If we want to get very useful results from data using AI, I think it is really important to set up the basic automation process first. That means that even though we have plenty of raw data, if we don't do the pre-processing, that raw data will be useless. That's why, when we bring AI to a big industry, it is really important to set goals for how we use and how we collect the data. After we collect the raw data, we have to set up a plan for how it will be used. In my experience, I have also collected raw data from the manufacturing industry, and I was really surprised that the raw data wasn't that beautiful. That's why a lot of engineers have to spend a long, long time doing pre-processing. So if we want to bring AI to industry, we have to understand how AI works, and then maybe we can bring it in. (inaudible)
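The pre-processing step the speaker says engineers spend so long on can be sketched as a minimal cleaning pass. All the example rows below are invented, and real pipelines involve many more steps (deduplication, outlier handling, imputation) than this illustration:

```python
def preprocess(rows):
    """Minimal cleaning pass over raw survey/sensor rows (illustrative only):
    coerce numeric strings, drop incomplete or unparseable rows, then
    min-max scale each column to [0, 1]."""
    cleaned = []
    for row in rows:
        try:
            # drop empty/placeholder cells, coerce the rest to floats
            vals = [float(v) for v in row if v not in ("", None, "NA")]
        except (TypeError, ValueError):
            continue  # row contains an unparseable value: discard it
        if len(vals) == len(row):  # keep only rows with no missing cells
            cleaned.append(vals)
    cols = list(zip(*cleaned))
    scaled = []
    for row in cleaned:
        scaled.append([
            (v - min(col)) / (max(col) - min(col)) if max(col) > min(col) else 0.0
            for v, col in zip(row, cols)
        ])
    return scaled

# Hypothetical "not that beautiful" raw data: strings, blanks, junk values.
raw = [("2", "55"), ("", "60"), ("3", "50"), ("n/a", "52"), ("4", "45")]
print(preprocess(raw))  # two dirty rows silently dropped, rest scaled
```

Even on five toy rows, two are unusable until someone decides how to handle them, which is the labor the speaker is pointing at.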
>> MODERATOR: Thanks very much. You talked about data preparation. Does anyone in the room have experience with data preparation, or would like to explain what it is, and maybe the human element of it? I think there may be some potential in AI for SDGs in terms of the huge workforce that's required there.
>> AUDIENCE: I'll be glad to comment, having dealt with lots of data, and I'll take somewhat of a contrarian approach. In my view AI, at least machine learning and deep learning, is a fairly gritty approach to making decisions. Yes, you can have bad data and it can distort your decision. But usually the question is: what's the alternative in that context? You are using the AI to help make a decision, so even poor AI may be better than the alternative that you have. That's part of what hackathons and such teach people: you can quickly get a certain type of information, or at least a suggestion, as part of a decision that you are trying to make or a conclusion you are trying to draw. And understanding the nuances is important. I prefer techniques that actually deal with dirty, messy data, mainly because, like you said, it is labor-intensive if you require well-labeled data. So there are options out there; there is not necessarily one size fits all.
>> MODERATOR: Okay. The people behind me might have to ask me to turn my head around if you have any questions. Does anyone else want to -- yep.
>> AUDIENCE: Thank you very much. My name is -- I work for the German development agency on the FAIR Forward initiative, artificial intelligence for all, which was successfully launched this morning. So it is rather new.
Thank you very much. I have two questions. The first one is rather simple: who is developing, applying and deploying AI for the SDGs? It draws inspiration from a remark earlier today from a private sector representative of a social business in Africa, who said that the demand for data he has experienced mostly came from the government and only to a limited extent from the private sector. If anyone wants to comment on that, I would be thankful. The second question is: where do you see the role of official statistics in this whole topic, looking at the rise of microdata? Especially in western countries, official statistics is moving heavily towards microdata and away from the broad national aggregates which basically defined national statistics for decades. That suddenly gives new datasets that you can try to apply AI on. I also have a background in central banking, and they're already doing it. Thankful for any comments on that.
>> MODERATOR: Could you just explain briefly what you mean by microdata?
>> AUDIENCE: Sure. Basically, any data which has as its unit of observation an individual unit: a person, household, firm, or a small geographical unit like quarters, villages, small districts, etc. That would be my definition, and it could even go down to the level of a single transaction, be it commercial or another kind.
>> MODERATOR: I haven't gone deep enough, but I suspect there was a similar excitement around statistics in the '50s or '60s, in what people called the quantitative revolution, and around national development. I think some of the promise of AI as we know it is actually statistics, or supplementing statistics. Does anyone want to speak to statistics? Any statisticians in the house?
>> AUDIENCE: Sure, I have a background. I think the microdata thing is interesting because, you know, it turns out the health data you might collect on your phone or Fitbit, to the extent it is telling you about your own performance -- what makes you sleep well, for example, or how your activity during the day affects your sleep -- that's a microdata application, would you agree? You don't have to have a huge database; it is collecting data on your phone. At the end of the day, deep learning is statistics. It has a very nice name, and you have robots and everything that make it much more glamorous, or insidious, depending upon your point of view. But it usually does come back down to statistics. I'm not sure what the question is in terms of statistics.
>> AUDIENCE: I think that was the question, and the statistical capacity of governments is hugely important. So when we are stimulating AI, we're also stimulating that capacity.
>> RAYMOND OKWUDIRI ONUOHA: The question wasn't very clear. What you explained seems to point to the imperatives for macro data, not micro. I think that, in relation to the SDGs, macro data will be very critical to measuring and quantifying potential, or charting progress toward the SDGs using AI, and there is an imperative to improve statistical capacity, especially for developing countries. It is estimated that for this to happen, a billion dollars annually is required to enable the world's lower-income countries to establish the statistical capacity supporting their measurements and goals. How this money and funding will come about is still not very clear. So that's a very big bottleneck in preparing, or being ready, to harness AI for the SDGs. When we look at the critical macro data, we understand that administrative data and censuses are the main sources of data used to inform this process, especially considering the U.N.'s 2030 Agenda as well as the agenda of the African Union with respect to the SDGs. The U.N. recommends that a census should be carried out at least every 10 years, and for better statistical results every five years, not simply relying on estimates and projections alone, as someone alluded to in a previous statement. When we look across Africa, a few countries haven't carried out a census since the 1990s; Nigeria had its last one in 2006. It is important to improve this process, especially with regards to the disaggregated data that can measure and show disparities with regards to age, income and geographical location. This is important to ensure an accurate assessment of the progress made on these various agendas, and of the gaps and issues of exclusion and inclusion that can be addressed by the authorities. So I think those are the imperatives for macro statistics and for being prepared to harness AI for the SDGs. Thank you.
>> MODERATOR: Thank you. In terms of time and the evolution of the discussion, it seems like we've moved on to constraints. So we are going to now talk about constraints on AI for the SDGs. As you have just joined us, would you like to mention some constraints that exist in the developing world in terms of realizing AI for SDGs?
>> AUDIENCE: So I think the very first thing, if I had to weigh in on that question, that pops into my mind is what Sarah actually said during the prep call, the prep meeting we had before the panel on Tuesday night. It reflected one of the comments made in the keynote opening: there are some really amazing, interesting pockets of innovative ideas and applications, of people building really innovative solutions in Africa. The problem, or the question, is how they can gain access to capital, broadband and so on, so that they can scale. So beyond the question of whether collecting data, building applications and building infrastructure can capitalize on some of this innovation, capitalize on the data, and deliver on some of the SDGs, one of the constraints I see is not specific to how we architect the regulatory space around data collection, filtering and cleaning and all of that. It is how we're going to create infrastructures where people have access so they can scale, so they can productize, get to an environment where they can sell the products that they have, essentially. If that was the question on constraints, that would be the first thought that comes to mind.
>> MODERATOR: Sarah, feel free to add more on that thought, because I thought it was really interesting, and I was hoping when we had the discussion yesterday that it would be a point of discussion on the panel. But -- thanks. Would anyone like to pick up on infrastructures, or a new constraint?
>> AUDIENCE: I would say a natural constraint, by definition of AI, is the fact that AI will perform whatever the AI is meant to do. If there is an algorithm designed for a specific industry, then probably the consideration will be to perform a specific task within this industry, and the AI could be very precise, or very powerful, in doing what it has been meant to do. But in terms of sustainable development, there is an environment, and there are surroundings, that will in the end be affected by whatever decision the AI has taken without considering the environment. We as humans perceive the environment: even when we want to buy a T-shirt, if we hear there is the possibility to buy environmentally and socially responsible cotton, we might change our decision and switch to a sustainable supplier. If the algorithm that shows me the alternatives for buying my T-shirt does not consider anything that addresses sustainability, then I will not have the opportunity to address my concern for the environment. So I think, if we are leaving decisions to be taken by AI, these decisions should include all the things that we humans are concerned about, such as sustainability, development and well-being. Even if a process has nothing to do with sustainability directly, like a recommendation system for cotton T-shirts, in the end it is related. So somehow there should be a policy that says: if you have a machine that will recommend products, and there is an opportunity to address sustainability aspects through the data that is fed into the algorithm, this kind of data should also be made available to the algorithm, so this artificial decision making is also loaded with information about sustainability.
My point with regard to limitations is that AI is only as intelligent as the data we feed into the algorithm, so we have to be responsible in choosing the right information, to shape the AI to do what is also related to the sustainable development goals. Thank you.
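The T-shirt example boils down to what signal the ranking function is allowed to see. A minimal sketch, with an entirely invented catalog and scores, shows how exposing a sustainability signal, and a weight on it, changes which product a recommender puts first:

```python
def rank(products, sustainability_weight=0.0):
    """Rank products by relevance plus a tunable sustainability bonus.
    With weight 0 the system behaves like a pure-relevance recommender;
    a positive weight lets the sustainability data influence the result."""
    return sorted(
        products,
        key=lambda p: p["relevance"] + sustainability_weight * p["sustainability"],
        reverse=True,
    )

# Hypothetical catalog: names and scores are invented for illustration.
catalog = [
    {"name": "fast-fashion tee", "relevance": 0.9, "sustainability": 0.1},
    {"name": "organic-cotton tee", "relevance": 0.8, "sustainability": 0.9},
]

print(rank(catalog)[0]["name"])                             # → fast-fashion tee
print(rank(catalog, sustainability_weight=0.5)[0]["name"])  # → organic-cotton tee
```

The speaker's policy point maps onto two design decisions here: the `sustainability` field must exist in the data at all, and someone must choose a nonzero weight for it.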
>> MODERATOR: Thank you very much. I'll try to get you verbatim. I think you pointed to two issues. The first is that AI is quite brittle: you can't just bend it towards any purpose. The second is the data we feed it: so much of this has a profit motive that, despite what you say, the recommendation system might not be fed that data. In the UNESCO panel I was sitting next to someone from a youth hackathon, or a youth IGF contingent, who made a recommendation system specifically for sustainable products. Whether it had the logic that's going to make somebody money, I don't know. Hopefully the two could be combined. The person there.
>> AUDIENCE: Hi. Among other things, I'm also a working group chair at one of the standards bodies. I think you made a very good point: AI systems are basically going to optimize toward a particular goal that they've been constructed to pursue. The definition of what the system is trying to achieve is something that has to be pre-set for it. The AI system can learn things from the data, but what it learns is how to better reach that optimization goal. The goal is defined by people. So what I'm trying to point out here is that it's not just the data that feeds in; it is also understanding the context in which the system is going to be used, so that you understand properly what it is that you need to be optimizing for. For instance, in the example you brought up about going to the market, you identify: wait a minute, there is actually something else here that would be better to do. This requires on-the-ground capacity to be able to adjust the optimization targets of the AI system. What that means is that you cannot rely on systems that are parachuted in with the claim that they will learn from the data and adjust to the local situation. You need to make sure that you have local capacity, and that the system is built in such a way that it isn't a closed box, so that things like optimization targets can actually be adjusted according to the situation. So if we're talking about possible constraints on the use of AI for SDG purposes, that means we need to always be thinking also about capacity building for the people in the area, so that they can actually take agency over the use of these technologies.
>> AUDIENCE: Hi, I work for the South African foreign ministry and deal with issues of science and technology. I'm just reflecting on the title of this session, listening to a lot of what you are all saying here, and also considering the aims of the SDGs. What I loved, and why I chose this session, was that we talk about dynamic partnerships, yet I almost feel like you are all experts in the field of AI speaking in echo chambers. My biggest concern is always that, between a governmental perspective and those outside of government, we don't seem to understand each other, precisely because we don't know each other's languages. For me that is the key. I've been fortunate that I came from the corporate sector into government, so sometimes I'm able to crawl back into that spectrum and understand where you are going. But understand that your field, particularly around AI, is very technical, and for somebody in policy development it becomes too much. So when we talk about the development of these partnerships, we need to find a mechanism to use AI to help us understand each other.
This is the intervention I was hoping to discover here today. And I'm hoping that this is where we'll end up with the moderator. Thank you.
>> MODERATOR: Donggi.
>> DONGGI LEE: Thank you for the comment. I want to add one thing about AI as well. I think the most important thing is the uncertainty of AI. While applying machine learning and other AI methods in the real field, I realized that sometimes I really get the value that I wanted to get in advance, and I call that an optimal value. However, even when you use the same AI package or the same AI method, depending on the weights we put on each value, the result will be totally different. In that case I can ask: so what is the optimal value? For example, there are a lot of points we have to consider in the SDGs. If there are different stakeholders, say party A and party B, then A and B can think in different ways. In those situations nobody can tell what the optimal value is. So even though we put AI into real life and it gives us a metric value, nobody knows where that value came from. That's why a lot of academic researchers are conducting research on things such as XAI, explainable AI. We have to keep asking whether the result can be believed or not. This is my point: depending on the people who make the architecture of the AI, the result can change. That's why we have to look inside the model, and we have to share opinions on what the optimal value is and what the correct method to apply for the AI is. Thank you.
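The point about stakeholders A and B disagreeing on the "optimal value" is exactly the behavior of weighted-sum scalarization in multi-objective optimization: the same model and the same data yield different optima under different weight vectors. A small sketch, with invented options and weights:

```python
def best_option(options, weights):
    """Weighted-sum scalarization of a multi-objective choice: the 'optimal'
    option depends entirely on the weights a stakeholder supplies."""
    def score(objectives):
        return sum(w * objectives[name] for name, w in weights.items())
    return max(options, key=lambda o: score(o["objectives"]))["name"]

# Hypothetical policy options scored on two SDG-flavored objectives.
options = [
    {"name": "A", "objectives": {"cost_saving": 0.9, "emissions_cut": 0.2}},
    {"name": "B", "objectives": {"cost_saving": 0.3, "emissions_cut": 0.8}},
]

# Same options, same scores; only the stakeholder's weights differ.
print(best_option(options, {"cost_saving": 0.8, "emissions_cut": 0.2}))  # → A
print(best_option(options, {"cost_saving": 0.2, "emissions_cut": 0.8}))  # → B
```

This is why the speaker insists the weights, and who sets them, must be made explicit and shared: the model's "metric value" encodes a human judgment that explainability work like XAI tries to surface.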
>> MODERATOR: Okay. Before we return do we have any new speakers in the room? No pressure. Just mandatory call for --
>> AUDIENCE: It's me again, thank you. Talking about the optimum value: the first step is having a value at all. This already happens in policy. You can think about driving a car: you want to get somewhere as fast as possible, you look at some Google Maps thing and it traces you the route, and then you have speed limits. And why are there speed limits? You would like to go a lot faster; you want to gain time. But the speed limits say you can't go beyond them, because there are safety issues, environmental issues or noise issues. And the optimum value here is that you, as the intelligence before AI, drive -- and one day the car will drive -- knowing that in this area I can drive up to 40 kilometers per hour, and beyond that it's out of the question. I press the accelerator, I come up to the speed, and then I can't go any faster. So which is the optimum value? The fact that a limit was established, and then the rest is left to the automation or to the other goals set in this big algorithm. That was also my point from before: for decision making, there should be something that first asks the question related to the environment. What should I be thinking about if I am in industry and I use water? What happens with the water in my area? What happens with the water of my population? Am I using the drinking water of my neighbors? Am I affecting the health of my neighbors because I am optimizing to increase productivity, and that makes me take certain decisions? Now, all this happens with humans, but if the same approach is brought into a machine, then the machine will continue to make the same errors humans have been making for many, many years.
If we are going to leave this responsibility to AI, then we should take into consideration that precisely there are 17 SDGs that should help us build a framework to keep in mind that there are external elements present in the environment that are also relevant to the decision making.
>> MODERATOR: Okay. We have 20 minutes left. I think that's a very good, important point about values and the 17 SDGs: you can optimize for one at the expense of the others. Maybe we want to move on to the risks, or potential risks, of using AI for the SDGs.
>> AUDIENCE: Hello. So actually my question may take us to a little bit broader scope, I would say. If we had had this session like five years ago, we would probably have asked what the Internet could do for the developing countries, and somehow we still haven't finished the digital transformation everywhere. I'm thinking about the challenges that we faced and the lessons we could probably learn from them. Personally, coming from Africa myself, I would say one of the problems with the access side of digital transformation was that we tried to give people solutions without consulting them. That, in my opinion, was one of the problems. I would love to hear from you: what lessons could we learn from Internet connectivity and the digital transformation, so that maybe the fate of AI will be different?
>> MODERATOR: Thanks very much. Tell us your name if you like and anyone who has spoken can drop me off a card and clarify the name for attribution.
>> AUDIENCE: This is -- I'm from Tunisia. A software engineer from Tunisia.
>> MODERATOR: I think we do have a lot to learn from the history of ICT access and digitalization. At Research ICT Africa we've observed the digital inequality paradox: the more people get connected, at least at the high level of mobile phone penetration and subscriber numbers, the more digital inequalities get introduced. And I think this happens with any wave of development.
>> AUDIENCE: I'm from Finland. I think the key point in what she just said is that we don't really know. We are talking about AI as it is today, or as it was last year, and we have no clue what it will be five years from now. It may be completely different. I don't expect us to have real general AI by then, but it will be rather more general than it is now; maybe something unpredictable. It is rather dangerous, I think, to make rules and too-tight predictions based on what we have now. What we have now is very narrow AI, and it is likely to be already much less narrow a year from now, let alone five years.
>> AUDIENCE: The one lesson we can take away is that user-centered design is part of the success of connectivity, especially for the most recent billion that have connected. And to our colleague's question from South Africa, it is about user-centered design: what is the problem you're trying to solve, and what's the decision you are trying to make better? If you don't answer those questions, then you don't necessarily get user-centered solutions. And I think user-centered design is key to making AI effective, especially for the SDGs.
>> MODERATOR: Thank you very much. Any other risks, potential risks or considerations?
>> AUDIENCE: Hi, so I would like to build on what was just mentioned around user-centric design, and connect it to the point you made before: in optimizing for one SDG we might be counter-optimizing and destroying another one. One of the aspects that comes with something like AI, as opposed to previous digitization efforts, is that in a sense it can happen remotely. You may not be aware of the fact that an AI is being used to do an optimization on you, because it is using data in a potentially remote computing location to make decisions about services that are being provided to you. So if you don't engage in a design methodology that actually involves the people on location, you may be prioritizing things differently from the way they would like them prioritized. That would happen less with previous technological work involving a hardware installation at that location, where at least the people in the area would know that this is happening.
>> MODERATOR: Thank you. Anyone have experience or anecdotes about optimizing for users in the location? I know in our studies in AI in Africa we actually are trying to pick up some examples of it. A lot of the cases and literature about bias is based in cases from the north and from the U.S. My fear is that this time next year we'll have a lot of cases and we should get on it now. Donggi.
>> DONGGI LEE: I think one of the risks can be responsibility. When humans are doing something, each task is assigned to a person in charge of the work, so if any situation happens, the responsibility will be on that person. However, when we gather the data or learn from AI, who takes the responsibility? We have to consider who can solve the problems as well. Among the SDGs, we have to consider that there are a lot of different stakeholders: one might be a decision maker, one might be an end user, one might be from the government. So if we decide to bring AI to the real world, especially focused on the SDGs, we have to think about who will take the responsibility, because when some bad situation occurs, in order to better solve the problem we have to find who takes the responsibility. If nobody can take the responsibility, it would be very dangerous, especially for any stakeholder who really needs high -- what can I say -- for example, a bank or the government. They always want to take technology or actions with high responsibility, just for the best situation. However, if AI gives a bad result, it would be a problem, so we have to think about the responsibility coming from AI as well.
>> MODERATOR: Raymond.
>> AUDIENCE: Just to chime in on the risks that we need to keep our eye on, especially the implications for developing economies as we continue in this drive of applying AI to realizing the objectives of the SDGs: one critical risk is amplifying structural inequalities, especially if the analog infrastructures, both soft and hard, that I earlier alluded to are not put in place. If this technology is deployed without them, then we continue to reproduce structural inequalities, and it risks becoming an innovation playground for technology companies to experiment in. If adequate policy measures are not developed, as we know, the Internet companies will not work optimally. Companies with significant power should not be allowed to write the rules governing their own behavior, so we can't just rely on industry-developed standards and principles. We must ensure the regulatory intervention required on many levels, especially with regard to governmental protection, is put in place, not just in the traditional format but in a frame that understands and integrates the risks of emerging technologies such as AI. Thank you.
>> AUDIENCE: I have another risk to add. To the extent that AI augments decision making, it can actually make one less resilient when there is an interruption in access to the tool that is helping make the decision. I don't know about you, but personally I use my phone for directions, and one time I decided not to turn it on, and I was making wrong turns because I had forgotten a route that I had driven 20 times. So I think there is a risk of atrophy of one's cognitive skills if one relies too much on AI. When the AI is not there, you are out of practice, as it were, for even some simple tasks and simple decisions, which for me was driving and knowing the route from one place to another.
>> MODERATOR: I'm sorry to say I share the same problem but I wasn't very good at navigation to start with. So the software engineer from Tunisia, were you just waving? Anyone who hasn't inputted yet? I think most of us have. Okay.
>> AUDIENCE: Thank you. I would like to put together one thing that Donggi mentioned as well as Raymond. Donggi said explainable AI. For those that are not really on the technical side of artificial intelligence, this means that the decision making should be identifiable, and someone can address why an algorithm came to the decision of selecting one of the possible outputs. The counterpart of something explainable is something not explainable; the term for this is the black box. You just input some data, you get a result, and no one knows how this decision was taken; you just trust that the level of precision of this algorithm has done a good job. And the contribution from Raymond was to also rely on setting rules or policies. Comparing this to the way that industry functions, there are many, many rules in industry: if you are going to work under certain conditions, you have to use safety shoes, safety glasses, a safety helmet and so on. So in AI, I can imagine that such regulations will in the near future become: you have to use an explainable algorithm; you have to do mid-term checks to verify that algorithms are running with a certain level of precision. And then this absolute autonomy that we imagine now with AI, which could evolve into something catastrophic, may be under control if there are specialists guiding the artificial intelligence into making the right decisions. Policy making is very important. Being able to explain why AI makes a certain decision is also important. And I think that it is something that we'll start to see in the near future.
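[Editor's note: the contrast drawn here can be made concrete with a small sketch. This is a hypothetical loan-screening example invented for illustration, not from the session: a simple linear scorer whose per-input contributions can be listed, which is the kind of traceable reasoning a black-box model does not expose.]

```python
# Hypothetical sketch of an "explainable" decision: a linear scorer
# whose verdict can be broken down into per-input contributions.

# Hand-picked weights for a toy loan-screening rule (illustrative only).
weights = {"income": 0.5, "savings": 0.3, "debt": -0.6}

def explain(applicant):
    """Return the decision plus each input's contribution to it."""
    contributions = {k: weights[k] * applicant[k] for k in weights}
    decision = "approve" if sum(contributions.values()) > 0 else "reject"
    return decision, contributions

decision, why = explain({"income": 1.0, "savings": 0.5, "debt": 0.4})
print(decision)  # approve
print(why)       # {'income': 0.5, 'savings': 0.15, 'debt': -0.24}
```

A black-box model would produce only the first line, the verdict; the second line, the itemized "why", is what explainability regulation would demand.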
>> MODERATOR: Sure. Thank you very much.
>> AUDIENCE: I would like to address the explainability a bit. One of the main features, the key characteristic, of AI is that it's not explainable, at least not in the sense of traditional programs. We can try to explain it to the same level that we can explain the actions of a dog who has been trained to do some things, but we can't really understand deep down how it works. We don't really understand how human decisions work, for that matter, so we can't even demand that of humans; we can't demand it from AI either. It's always a bit of a black box. We can try to explain it to some level, like testing how it works, but it is not explainable or understandable in the sense that a traditional programmer can go deep, point out exactly why this happened, and change it. That's something we just have to live with, unless we reject AI completely, and I don't think we can.
>> MODERATOR: Or perhaps there is a chance to make AI even better, and then we find algorithms to make it explainable to a certain level.
>> AUDIENCE: There is still work to do. It has to be done.
>> MODERATOR: --
>> AUDIENCE: I think that's why we have to understand the process of the development of AI. From around 2012 to 2016, AI was all about machine learning; from 2016 to the present, AI has been about learning and inference; and from now on, AI is going to take action in decision making. I think the speed of development of AI is incredibly fast, faster than any other technology. That's why right now we can call it a black box. However, I believe that when it matures, maybe there will be a reason why the AI takes its actions. AI is not a perfect technology right now; however, if it becomes a mature technology, then I think we can use it appropriately as well. So we have to watch how it changes in the near future.
>> MODERATOR: So we've got three minutes left for closing remarks. I think you offered yours, Donggi, could we have Greg and then Raymond.
>> AUDIENCE: In one of the sessions yesterday the question was: are you ready for the future? It's going to be, I think, quite interesting. The research is really continuing to evolve, and I believe in 50 years, hopefully less, we will have explainable AI. But I think we'll also have many people using AI on an individual level, and small companies, in ways that we can't quite imagine yet. So the best, and maybe the worst, is yet to come.
>> RAYMOND OKWUDIRI ONUOHA: Just to end up: it's about framing a national AI strategy. Without a strategy it will be impossible and ineffective to measure how much progress you're making on leveraging this technology to achieve the SDGs. In Africa, Tunisia and -- are working towards a national strategy. The outlook for AI in Africa remains positive: a few countries are moving in that direction. Some have it as a separate objective, some have embedded it in a policy, and some are setting up national task forces on emerging technologies. It is important to support government readiness so that countries in Africa can benefit from the potential of AI in their economies. If you look at the 2019 government AI readiness index, it paints a fairly disincentivizing picture for the African continent: in global rankings of this nature, there are presently no African countries in the top 50 and only two countries in the top 100. African countries must begin to design strategic and effective national AI policies that can work for the continent. I believe such a strategy should involve a robust regulatory policy framework; building the capacity that is lacking; frameworks with regard to privacy and security, cybersecurity, cloud adoption initiatives, and industry-led standards; automating the public sectors, which are huge leveragers of data, as someone has alluded to in the discussion so far; and also collaborative environments. In this digital era the governments can't do it alone; they need to move the processes forward together. In doing so, I think in the next decade Africa will stand -- (inaudible) to meet the targets that have been set in the SDGs. Thank you.
>> MODERATOR: Thank you very much, three, two, one. I encourage you all to swap details. If you've spoken to me, give me a card especially the gentleman from my home government. Hi. Thank you so much for coming.
In Germany they knock on the table, by the way.