IGF 2021 – Day 0 – Event #102 Social and ethical perspectives of using Artificial Intelligence

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> (Inaudible) and safe. We all want to trust‑‑ 

>> We all live in a digital world. We all need it to be trusted and safe. 

>> We all want to trust. 

>> And be trusted. 

>> We all despise control. 

>> And desire freedom. 

>> We are all united. 

>> KINGA PASTERNAK: Hello there. Good morning, everybody. I wish to welcome you from Katowice, from the IGF 2021 GovTech Centre panel “Social and ethical perspectives of using artificial intelligence,” which is our theme for today. But firstly, I want to introduce our brilliant panelists who are here for us today. First, Justyna Duszynska, Head of the Digital Transformation Research Group at the Łukasiewicz Research Network. We also have Antoni Rytel, who is Deputy Director of the GovTech Poland Programme, and Maksymilian Paczynski, who is the winner of the Impact Creators contest by Intel. We want to talk about artificial intelligence and its impact on us. And my first question is for Antoni. The Polish government adopted the policy for the development of artificial intelligence in Poland a year ago. How can we take everything good from it? I mean, what does it consist of? Could you tell us more about it? 

>> ANTONI RYTEL: Yes. So first of all, thank you to the IGF team for hosting us today. It is obviously a great pleasure to be here, especially as we are all discussing issues pertaining to the functioning of the Internet, but also to how we can interact with technology in general, and with each other through technology. And the AI policy is obviously just a vehicle. It's something which we set as an outline of our objectives as a government but also as a country. And the first thing we need to realize is that the burden and the crux of the impact of artificial intelligence will not be delivered by the government, won't be delivered by the public sector. 

We are more of this proverbial stone which can cause an avalanche, or at least help direct it in the right way. But we aim to be the ones supporting. So obviously the first thing we have to do is use the potential this technology has for improving life, broadly speaking, for our citizens but for Europeans as well. Our estimates say that we have an opportunity to increase the pace of GDP growth by about 2.65 percentage points, which is obviously a lot, but we need to put this into perspective. We see, for instance, that the market is forecast to be at about $200 billion by 2025, which obviously seems a lot but is actually less than, let's say, the fertilizer market. 

It's less than the fisheries market. For some reason we keep talking about AI instead of those. And the reason for this is the pace of growth and the impact it has on society itself. For instance, three years ago the market used to be $60 billion, so you can see the sort of pace we are looking at. It's not something we can stop. It's something we can look at and hopefully act on. The policy itself consists of about 200 actions, which are split across six categories. Not going too much into the details, we have legislative actions which aim at facilitating testing, for instance, while enhancing the security we have. 

It also aims at introducing AI‑related issues to school curricula, et cetera. And then the other part is direct support, so things like development programmes, which of course are also very closely related to European policy, which we're very grateful for, but also equipping schools with the relevant equipment and, in general, increasing the digital awareness and competence of society. So this is the direct support. And then finally, of course, there are quite a few policy‑related actions. Obviously we're not, as a country, able to just act unilaterally on this issue. This is something we need to consult and coordinate with our partners, both European and international, including at the invitation of the U.N. So the policy also sets out goals there. The overall goal is very simple: to have our country, but in the broader context as well, use the full potential this technology gives us and hopefully emerge successful from this revolution that mostly, actually, revolves around AI. 

>> KINGA PASTERNAK: Hopefully the policy will help us all to develop AI. Justyna, we heard something about the role of the state, but what is the role of scientists and science centres in building AI in Poland? How does Poland look in this respect in comparison with other countries? Could you tell us more about it? 

>> JUSTYNA DUSZYNSKA: Good morning, everybody, and thank you for having me on the session. I'm happy that we have the opportunity to discuss this topic because it's very crucial. And when it comes to the role of science and scientists, we can say that it's essential. That's why many universities in the EU have AI courses dedicated to this subject in their education offer. France, the Netherlands and Ireland are at the top of the list with their programmes. Poland, with its 34 programmes, ranks 11th in the EU. We are a leader in Central and Eastern Europe with our AI specialized courses, especially in higher education for postgraduate students. Since Brexit, the UK is not in the EU, but the UK, with its almost 1,300 programmes, would easily dominate the list. 

And when it comes to producing AI specialists, we can say that the leader in Europe is, of course, France. Graduates of French universities make up 29 percent of top AI specialists in the EU. That's twice as much as Germany, which is in second place, and France is also the country where most top specialists want to continue their career. Poland, even though it has a 4 percent share in the EU in terms of producing top AI specialists, is at the bottom of the ranking when it comes to keeping the best specialists, the same as other countries in the region. That's mainly due to brain drain and the migration of specialists to other countries. 

And that's why we have to think about how to keep them in our country, how to enable them to acquire new experiences and gain new competences. And that's why the core activity of the Łukasiewicz Research Network is cooperation between the scientific community and business. We think that's a very good way to give our researchers room to develop. Two years ago in the Łukasiewicz Research Network we launched a system of challenges. That's our unique and original idea for providing, in a quick and effective way, an offer which meets our clients' needs. 

And thanks to this system, our business partners, within 15 days, get an idea of how to solve their technical problem or how to build the solution which they need. And after those 15 days, we have a team of experts who are ready to cooperate. What's more, we can also engage AI specialists in international cooperation. Again, in our network, we established a centre for foresight and international cooperation, whose aim is to connect and support our researchers in cooperating with specialists from other countries and to involve them in partnerships and consortia where they have an opportunity to develop and gain new experience. 

>> KINGA PASTERNAK: Let's hope to be first in Europe, in the EU. Maksymilian, as I told you earlier, is a laureate of the Impact contest. Maksymilian, could you tell us how and why you got involved in building AI solutions, and, from your perspective, how should we encourage young people to do the same? 

>> MAKSYMILIAN PACZYNSKI: Good morning. I would like to thank you for the invitation I received. I am very happy to be here in person, and it's an honour for me that I can be in such a place. I am Maksymilian Paczynski, and I am 17 years old. I won the international competition Impact Creators, which consists of creating a project related to artificial intelligence. My project deals with a problem of today's world, which is exhaustion. The main goal is to wake drivers up from microsleep, because during it drivers lose focus, which in the worst case can be tragic for them and for other people. But why did I start working on this project? Because I am a person who is curious about the world, and I found topics such as AI, and participation in this project and development in general later became a passion. 
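For readers curious how such a system can work in practice, below is a minimal sketch of one common approach to microsleep detection: tracking the eye aspect ratio (EAR) of facial landmarks and raising an alarm when the eyes stay closed for too many consecutive frames. This is an illustrative reconstruction under stated assumptions, not Maksymilian's actual code; landmark extraction from the camera image is assumed to happen upstream, and the threshold and frame limit are tunable guesses.

```python
# Sketch of EAR-based drowsiness detection (illustrative assumptions, not the contest project).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 (x, y) landmarks in the usual order: corners first, then upper and lower lid."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.2        # below this, the eye is treated as closed (assumed, tunable)
CLOSED_FRAMES_LIMIT = 15   # roughly half a second at 30 fps of continuously closed eyes

closed_frames = 0

def update(left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
    """Call once per video frame with the two eyes' landmarks; returns True when the alarm should fire."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_LIMIT
```

The design choice here is that a single frame of closed eyes (a blink) is ignored; only a sustained drop in the eye aspect ratio, which is characteristic of microsleep, triggers the alert.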

When I started the training, I had no idea about topics such as data science, computer vision, or natural language processing. Currently at schools it is said that there is such a thing as AI, but it isn't developed further: where can we use it, and what do people gain from it? Poland is doing everything to go in this direction. It's amazing to me that the Laboratories of the Future programme has a budget of over 1 billion. Recently I have tried to imagine such a large amount, but I still cannot. But it doesn't cover high schools, and I think this is a big mistake, because high school students have the most ideas about the world around them. Education is directed more towards constructing something than understanding the use of something. I believe that this method is inefficient. Why? Because, for example, in computer science class there's a series of algorithms, each of which is different. 

But I don't learn the most important thing: where I can use them. In the case of young people's interest in artificial intelligence, the aspect of use is the most important. An ordinary student, even if he or she is sitting all the time with the best books, will not, for example, understand artificial intelligence from scratch. Why? Because his or her high school level of mathematics will not allow him or her to do so. But what they can do is understand the meaning of applying it in real life. However, the question is how we can engage young people in this. I believe the answer is using ready‑made models, ready‑made programs where students only put in data; that gives the most important value: satisfaction. A student who writes a few lines of code built on ready‑made models has the impression that he or she created something from scratch. This gives them motivation to improve it, and to analyse and check the created work on other data. It develops their general skills and their curiosity to understand the new trend that is happening around them in today's world: artificial intelligence. 
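To make the "ready-made model, just add data" idea concrete, here is a minimal sketch of what such an exercise can look like. The library and dataset below are illustrative assumptions, not the actual materials used in the contest.

```python
# A student supplies only data; a ready-made scikit-learn model does the rest.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                       # the data the student "puts in"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier()                           # ready-made model, no advanced maths needed
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))            # an instant, visible result: satisfaction
```

A few lines like these let a beginner see a working result immediately and then experiment with their own data, which is exactly the motivational effect described above.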

The Intel AI for Youth project was prepared just like this, and that was the success of this competition. As a beginner, I could understand the application first and then move to the next step of development. It is also amazing news that the Minister of Development has joined the next edition of this competition. And I took first place in the world. All of this is the key to how we can encourage children and young people to engage with artificial intelligence. 

>> KINGA PASTERNAK: Thank you, Maksymilian. As you could hear, Maksymilian won the first prize for creating an AI system that helps you not fall asleep while driving. So once again, congrats. We know that we have lots of youngsters who are creative and who can develop lots of brilliant ideas. But I want to look at the perspectives from the other side, from the state's perspective. What prospects does Poland have for the development of new technologies, including, of course, AI? I wish to look at this from the perspective of the citizens and the country, and I wish Antoni to tell us more about it. 

>> ANTONI RYTEL: Yes. Thank you. This is obviously a very complex and convoluted issue, but put very briefly, I think Poland is quite uniquely situated in terms of the mix of skills of our students and our graduates. We are actually the only country in Europe, as far as I know, which teaches programming from the very first grade. As Maksymilian said, we've also launched the Laboratories of the Future initiative, which is, in fact, the largest investment in modern and innovative education in the history of this country. Thanks to this, elements such as microcontrollers and 3D printers will be part of teaching new technologies in every primary school in Poland, and not just in computer science but hopefully, and this is at least our objective, in other disciplines as well, because we think that the incorporation of new technologies is actually discipline‑agnostic. 

So it's not a discipline in itself. It may be, if people wish to pursue it. But it mostly facilitates the teaching of pretty much anything, really, and develops those skills which we hope future generations will be equipped with. So there's definitely a lot of educational potential. We are also one of the largest producers of IT specialists in general. As was said, we are lagging behind a little bit in terms of AI specialists in particular. But when you look at the market in general, there is a really large supply coming out of universities. Of course, just like pretty much any country, we still have a deficit of about 200,000 specialists, especially in those very narrow and particular areas which are perhaps rarely used, but when they are, that is probably where the largest value is generated. The issue, I think, and something which we did actually already set as a challenge for ourselves, is to direct this potential, which undeniably exists, into the areas of highest potential yield for the economy and, we believe, indirectly also for society at large. Those are areas such as artificial intelligence, the Internet of Things, cybersecurity and a few other priority sectors. 

We think that this requires a slightly different approach, but we believe that everyone should have a set of basic competences which revolve not only around digital skills but also around things like problem solving, creativity, collaboration, et cetera. And then there should be an infrastructure, both research‑wise and business‑wise, which is able to take someone with this sort of background and direct them into more specialized fields, where the state isn't the one which will be directing them or coming up with those fields, because this is something the market really knows much better, and we do not want to be redundant to it. So, obviously, the way is through collaboration, the way is through multiparty discussions, the way is through including and engaging the perspective of the private sector in what we're doing here as a state. 

And I think that, you know, we have realized the role we have in this ecosystem, which is a supportive role, both legislative and financial. I think that this is the way forward. I mean, we already have research groups, we already have grants, we already have quite a few tools and programmes, both national and European. And I think that if we're able to work together to direct them towards those areas which actually have the highest potential yield in the next few years, or even decades, then I believe this would be the way to go about it. We are certainly at your disposal, so to say. 

>> KINGA PASTERNAK: We have a great surprise, because I want to welcome one more guest to our team. I would like to welcome Ruslana Krzeminska, who is from the Republic of Poland. I just want to thank you for joining us. We had some technical issues, but I'm really glad that we could manage them. Ruslana, can you tell us how we can prepare the state for the sustainable development of technology in accordance with our traditions? 

>> RUSLANA KRZEMINSKA: Hello. Good evening, good morning. I'm glad to be here, and I'm sorry for being late. At first I must say this: I love AI. This is my crush and the apple of my eye, totally. Yet if people are technological beings at heart, like me, what are the implications of this? Ethical frameworks develop in interaction with technology; technological capabilities push back the limit of what we can produce and how we manage the events that impact our lives. Science, ethics, religion are more than just human activity, the work of people. 

They are also determined by technology. Okay, I have another question. Ethics for new technology, is it a good way? Is it correct? I guess so, yeah; this has to work for everyone's sake. So don't forget values. Don't forget how strong we are. After all, we are the lord and father of AI. On the other hand, please remember that modern AI has a high level of autonomy. And this is correct, it's okay, this is not a big deal, you know. The universe is roomy. So what we need to do is find a harmony between nature and technology, between soul and mind, between heart and brain, and ask people a simple question: are you ready for AI? I guess so. I guess so, yes. So let's invest in education, public debate and public consultation. The state and social institutions should promote social inclusion through participation in the public debate on AI. 

>> KINGA PASTERNAK: Thank you for that voice. We had the state perspective. But I want to ask Justyna: can we expect more in‑depth projects using AI in the coming months? I mean, you work with it, so you have the greatest knowledge about it. What areas will they cover, and what will they be relying on? Could you tell us more? 

>> JUSTYNA DUSZYNSKA: Yeah. Currently AI solutions are mainly deployed in digital sectors such as telecommunications, finance, banking, media, retail and health care, and that's partly due to the amount of data being created there and available for AI processing. In terms of domains and applications, AI is deployed in areas such as data analytics, computer vision and NLP, natural language processing. But we can expect that many innovations will now be deployed in solutions based on the IoT, the Internet of Things. In 2021, it is estimated that investments in IoT solutions will increase by over 12 percent, and up to 2025 investments are expected to grow by double digits every year. 

And we can expect that these new solutions will be deployed mainly in areas such as smart industry, smart homes, smart cities, agriculture, health care and security. So there are many different areas where we can expect these solutions. For example, in our research network we use this technology in solutions applied in production, robotics and IoT, too. But now we are working, for example, on an automatic text translation system into Polish Sign Language, and this solution is fully based on AI technology. That's an example of a solution which shows how AI contributes to accessibility for people with special needs. 

So as we can see, this technology will be applied in many different fields, and we can expect even more of these solutions. But I think that, regarding the topic of our session, what's more crucial is the question of whether we are ready for them, whether we want them, and whether we are aware of their impact on our lives. AI technology is both transformative and disruptive, and it's very important to be aware of that. I think that each of us should ask this question. And I would like to leave us with this reflection: are we ready, and do we want these solutions? Thank you. 

>> KINGA PASTERNAK: I can surely assure you that we do want it. But I want to ask about citizens' expectations of the country's digitalization, from the perspective of those same citizens. Can you tell us, Ruslana, what the expectations are, and what the results are of the work to meet those expectations? 

>> RUSLANA KRZEMINSKA: Okay, yes. I see the same in Poland, and I see it all. Just kidding, but maybe not. Expectations, we see them all, yeah. And most of them are fears. You know, who do you trust? How do you know? By how they appear, what they say, what they do? How? We all have fears. And one of the biggest fears of AI is simply fear of it and of what it's capable of. Another major fear is, okay, AI is a job killer, or bad people will do bad things with it. And last but not least, a superpower like superintelligence will kill us for sure. Okay. So the government, the state, has the goal of supporting the development of quality education. We must say that AI is not just a nightmare or a dystopia. AI is a goal, our hope and future for health care, longevity and happiness. So, you know, calling it the future might be a mistake. Maybe it's not the future. Maybe it's now. So stay cool. Be calm, be smart, and take it easy. We are the future. And the future is now. Thank you. 

>> KINGA PASTERNAK: Okay. My next and last question to Maksymilian is: what vision of AI do you have, from your perspective as a youngster, and from that of all the young people you represent here? 

>> MAKSYMILIAN PACZYNSKI: As I said, I'm 17 years old. But as a person who won a global competition based on machine learning, I can say what the vision of young people about artificial intelligence is. We are very curious about AI. And as you can see, we want to participate and learn more and more about it. However, we grow up surrounded by a flood of information, and we base our knowledge on these topics on the Internet, where different people have different opinions. One piece of news about artificial intelligence, I think, is that it could bring an almost 3% reduction in greenhouse gas emissions by 2030. Another is the implementation of programs aimed at reducing the number of road accidents caused by falling asleep. And I think the best is that even now we have incredible accuracy in detecting plastic (?). So I think this news about AI is very interesting for young people. 

And that's why such news should be promoted more, and interest in the IT world instilled. We already know that we need to further educate ourselves on the concept of AI, because in the future it will be our everyday friend, not an enemy. And here we must ask ourselves about Revolution 4.0. This is an aspect that is exciting but also scares us. As we know, in the next 20 years almost 30% of jobs will be automated. However, as it was during previous revolutions, many new jobs will be created. This is where the fear of young people is born. We don't know in what direction to study, because we don't know if we'll really be able to keep working until retirement. 

In my opinion, the best solution is to talk often not only about the past but also about the future. I am happy that the Polish government has begun work on this by introducing the school subject about the present day. However, I believe it should be more focused on understanding the concept of automation. Thanks to this, young people will understand what it is, which will reduce their fear of an uncertain future. And they will understand the most important thing in today's world, which is artificial intelligence. 

>> KINGA PASTERNAK: I have one more open question for all of you. If there were an AI solution that had all of the knowledge of the world, what question would you ask it, and why? Antoni? 

>> ANTONI RYTEL: I know the answer will be 42, as far as I recall from the literature. But going back to that literature, I think it often happens, especially with technology, that we do get the answer, but we sometimes don't know what to do with it. And I think AI is quite notorious for this; I mean, it will always calculate something eventually, but it can lead to many, many interesting conclusions. One instance I got from a project we did with the Department of Defense: there was a project which aimed at detecting tanks or enemy armored vehicles. And the algorithms worked spectacularly in testing: the system was almost always able to distinguish a tank or identify an ATV or an APC. But when it was actually deployed during real operations, it had horrible results. I mean, it literally didn't work. 

And the reason for this was that the training set was compiled in a way where tanks were shown during the day, and all the other vehicles were shown during the night. So what the algorithm learned was that whenever it is day, whatever it sees is a tank, and when it's night, it's never a tank. And this obviously meant that the result was useless. And the very smart people who worked on this didn't actually notice, not because they didn't have the expertise or the skill or the knowledge, but because the calculations themselves occur on a level which is at this point already beyond our understanding. 
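As a concrete illustration of the failure mode Antoni describes, here is a small, entirely synthetic toy experiment (a sketch, not the project in question): a nuisance feature, brightness, is correlated with the label in the training data but not in the field, so a simple classifier that latches onto it looks excellent in training and collapses at deployment.

```python
# Toy demonstration of a spurious correlation: "tanks photographed by day, other vehicles by night."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious):
    y = rng.integers(0, 2, n)                    # 1 = tank, 0 = other vehicle
    shape = y + rng.normal(0, 2.0, n)            # weak genuine signal (vehicle shape)
    if spurious:
        brightness = y + rng.normal(0, 0.1, n)   # daytime almost only for tanks
    else:
        brightness = rng.normal(0.5, 1.0, n)     # lighting unrelated to the label
    return np.column_stack([shape, brightness]), y

X_train, y_train = make_data(2000, spurious=True)    # how the training set was compiled
X_field, y_field = make_data(2000, spurious=False)   # how the real world looks

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on biased training data:", clf.score(X_train, y_train))   # looks spectacular
print("accuracy in the field:           ", clf.score(X_field, y_field))   # collapses
```

Nothing in the model's own training metrics reveals the problem; it only shows up when the evaluation data no longer shares the spurious day/night correlation, which is precisely why the smart people working on it did not notice.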

So I think the largest question, and I'm not sure that AI will actually help us answer it, but hopefully we may get it sorted out eventually, is what kind of questions we should put to a system in order to get the answers we actually want, instead of the answers which we think we want or the answers which do not actually provide meaningful solutions to us. So I think that the art of asking questions, and rhetoric, something we came up with a couple of millennia ago, is actually more than relevant in the current context as well. 

>> JUSTYNA DUSZYNSKA: If I had the opportunity, I think I would ask AI the question which I think all of us should ask ourselves: where do we need you? Where can the technology be useful and helpful for us, where can it support us? I know that AI technology can't answer where it is harmful, but we should be aware that in some spheres it is harmful as well. And as Maksymilian said, I think it's very impressive that young people are so aware of that, that we need education to be conscious of all of these aspects of using this technology. And I think we should invest in education and build awareness; that's the most important thing. 

>> KINGA PASTERNAK: Thank you, Justyna, for that opinion. I would like to ask all of you here: is there any question? Please come up to the microphone. 

>> AUDIENCE: I'm (?). To Ruslana, why happiness? Do you think a more predictable world will take humankind to a happier stage? Thanks. 

>> RUSLANA KRZEMINSKA: I think it's a really tough question. It's a good question, I think. You know, you hit the nail on the head, I guess. Why happiness? Yes. Maybe you are right. Maybe a more predictable world will take humankind to a happier stage. Maybe you are right, sir. 

>> KINGA PASTERNAK: Is there any other question? Come up. 

>> AUDIENCE: I have a question for Mr. Antoni. Well, you've mentioned that we are not really capable of understanding how the algorithms that are already applied actually work, and the topic also touches the ethical side. So is it ethical to apply artificial intelligence in these kinds of vulnerable sectors? You mentioned the medical sector, where a fault may be a disaster at some point. Who should be responsible for that? What kind of supervision should there be? 

>> ANTONI RYTEL: Well, yes. Thank you. It's, obviously, again, something we could hold an entire conference on and probably not arrive at an answer even then. But at least the way I approach this issue is this: of course, it's not that we should all feel compelled to accept it, but it's also not something which you really get a say on, as a society, as individuals, as states. It's something which will occur sooner or later. And the reason for this is brutally simple. It's easy for us to discuss whether it's ethical, whether it's sustainable, whether it should be utilized or not. But in the end, I would find it rather difficult to explain that to someone who did not receive proper treatment due to the lack of implementation of new technologies, or to people who ended up not being properly diagnosed, or who ended up waiting longer, too long, perhaps. 

You know, we ended up not using something which could have saved them because we weren't sure whether it may or may not be faulty. And I think that, while we do not know whether AI will generate problems of its own, or rather we're pretty sure it will but we don't know them precisely, we're also pretty sure that it will solve quite a few problems we know already. And, obviously, the issue of responsibility is key, I think, and this has also been presented in numerous statements the government has made. The responsibility obviously has to lie with those who either created these solutions or took the responsibility for using them. 

So if you apply something knowingly, you're the one who takes a certain action with the support of the AI; the AI doesn't take it for you, right? So obviously, I think the technology is still not there to entirely replace the human factor in decision‑making. I don't think there's anyone who is really considering going that way. That said, I think that, just like with any fundamental technology, there will be tough questions asked, and answers will probably have to be worked out on a case‑by‑case and sector‑by‑sector basis. This will take quite some time, I think. But I really don't believe that we have much of a choice in terms of whether we end up using it or not. 

The question, which I think we can answer and which each of us will end up answering, is whether we want it to be applied in a given instance and under which conditions we want to do it. I don't think we can run away from this. We have to take it head on. And I think, just like with any other technology which ended up reforming the way we think about societies, we will end up succeeding, because that's what we as humankind do. We're supposed to overcome challenges, not run away from them. 

>> KINGA PASTERNAK: Thank you for that answer. Is there any other question? Oh. We have another one. Come up, please. 

>> AUDIENCE: Just a quick one. How much do we need to understand AI in order to use it? For reference, we don't really understand how a dog works. We train them anyway, but still we don't understand what goes on in their minds. How about with AI? 

>> KINGA PASTERNAK: Do you want any particular person to answer it? So the question is open to everyone. 

>> ANTONI RYTEL: I think the brief answer is: we, as users, don't, but hopefully there is at least one person who does, so there is at least one source of reference which will help us work this out. Eventually, of course, you're right. The fact that we are not entirely sure what's going on under the hood is not something which should deter us, as users, from utilizing a technology. We don't understand most of the technologies we're using every day, and this will not change in any foreseeable future. But we didn't design the human brain, and therefore we have no reason to know exactly how it works. 

But when we do design something, we should probably at least be able to figure out whether it makes mistakes or not, and if it does, then how and why. I don't believe this is a barrier, but obviously we're not going in the direction of every consumer having to know the precise details of an algorithm. That's not something which will ever occur. But I really think that those who design something should take responsibility for having at least a directional knowledge of where it's going and how it can be used. As my colleague said, application is key here. About 80% of us have an idea of the existence of AI, and those people also use these solutions, but only a very narrow group of people know the exact details, and I think this is sort of the way it will end up being in the future in practice, just like with any other technology, really. But, of course, the level of understanding and skills needed to make full use of this technology, just like any other, will vary by application and by institution. 

>> KINGA PASTERNAK: Thank you. It seems that our time has come to an end. So thank you, once again, for being here. Thank you, Ruslana, thank you, Justyna, thank you, Maksymilian, and thank you, Antoni, for this incredibly interesting panel. I think we could talk about AI for much longer than the hour we scheduled. Thank you for being here, and thank you for being with us online as well. I hope you have a great IGF Day 0. Thank you once again. Have a good day.