IGF 2023 – Day 4 – Town Hall #63 Impact the Future - Compassion AI – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> ROBERT KROPLEWSKI: Good morning.  Welcome to the town hall, a special panel dedicated to Impact the Future under Compassion AI.  I'm Robert Kroplewski, responsible for the Information Society, and I'm engaged in many international expert groups designing approaches to artificial intelligence: policy, law, and recommendations.

We have very special guests in our town hall; some of them are present here, some online.  With us here in the room we have David Hanson of Hanson Robotics, so you probably know him from that.  With me on the right side is the host of the GAIA Foundation and content creator of Compassion AI, and my co-moderator, Damian Ciachorowski.  Online we have the president of the board of the GAIA Foundation; we have Tom Eddington; we have Marc Buckley; and also Marko Grobelnik, though it could be difficult for him to participate.  And Emma Ruttkamp‑Bloem is from the University of Pretoria; we would like to present her contribution during our session.

At the beginning, I would like to present, as a first overview, some worldwide outputs of international engagement in producing recommendations for artificial intelligence.  But first of all, we need to say why we organized this meeting, this town hall.

There have been many papers on artificial intelligence, on how to implement it responsibly, and on the best approach to it, but we still have competition around it.  There is an asymmetry between the ethical approach, which developed into the trustworthy artificial intelligence approach, and the practical deployment of artificial intelligence.  From the ethical point of view we are still in utilitarianism: we can implement any resources and scale our business model.

The ethical perspective should carry over into practice, but we are still in that process.

The landscape of policies and recommendations comes from the OECD policy recommendations; UNESCO's Recommendation on the Ethics of Artificial Intelligence; and the European Union, with the Ethics Guidelines for Trustworthy Artificial Intelligence and the Artificial Intelligence Act, which will probably be the first binding instrument around the globe, from the legal perspective, for empowering ethics and implementing ethical dimensions in artificial intelligence systems and organisations.

The next binding instrument will come from the Council of Europe, the first organisation around the globe seeking to promote a treaty in the domain of human rights, democracy, and artificial intelligence.  Serious talks are still continuing among transatlantic technology experts; that team is discussing the topic of value chains, and also whether the approach to artificial intelligence should be "trustworthy" or "responsible."

This is very important.

NATO is also engaged, but from the standardization point of view: how to share data and shape artificial intelligence algorithms within NATO's role.

From the scientific perspective, the work is still not finished.  We started at a point where anybody could do anything with artificial intelligence, from the technical point of view.  Then trust became the main element of the recommendations, which shifted and converged into trustworthy artificial intelligence.  We still feel that there is a gap beyond those recommendations, and that gap is something like the compassionate approach.

We invited the GAIA Foundation to say a bit more about this.

As experts and policymakers, we have difficulties in how to approach and solve problems, how to deal with the benefits of artificial intelligence, and how to manage the risks.  The first stage started from the perspective of control and supervision, especially human oversight.  But a very good approach requires something in addition, something more than governance and stewardship.

We are still before the care approach, and finally, maybe now, here at the IGF, is a good point to talk about the compassion approach to artificial intelligence.

From that perspective, we would like to underline some values that are in the loop of our discussion today, coming from the many papers.  The main compass for solving any conflicts among values comes from UNESCO: the special triangle between individual dignity, well-being, and no harm.  This is the compass for everything.

We as policymakers, of course, started to deal with asymmetry in access to knowledge, from the informational and educational points of view.  But we are still at the very beginning of making the ecosystem flourish: engaging SMEs, scientists, and even policymakers to build a solid ecosystem, not only for the giants, but for everybody who would like to participate in producing benefits for the planet and benefits for people.

That conjunction is very important.  We must see not only benefits for people, and not only benefits for the planet, but both in conjunction; that is the kind of approach in the OECD recommendation.  We tried to find a new approach that could cover the gaps, and the gaps still include oneness in diversity.  This is the starting base of developing compassion AI in my approach, and that is why we have our guests today.

And now I would like to give the mic to my co-host Edi: what is GAIA, and what is your work here today?

>> EDI PYREK: A few years ago, together with David Hanson and my friend Piotr, we understood very well that we are at a crossroads.  We don't have time to make a mistake, not only because of climate change and the war and the pandemic, et cetera, but mainly because of artificial intelligence.  We know that before we started creating the Internet, we didn't ask ourselves how dangerous the Internet could be.  We still have time to decide what the future of AI will look like.  The first question was ethics.  But what kind of ethics: Polish ethics, Russian ethics, Buddhist ethics?  It depends on the culture and the religion.

When we started to study different religions, civilizations, and cultures, we understood that each of these religions, each of these philosophies, has one thing in common, without which there is no religion: compassion.  Without compassion there would be no Muslims.  Without compassion we would not even have civilization and evolution, because we evolved because we know how to cooperate.  Without compassion, we have no evolution.  Because of this, in 2020 we created the GAIA Foundation.  We decided to decentralize, and in 2021, during the preparation for the IGF, we announced GAIA to the public and started talking about compassion and the things we would like to achieve.

One year later, in Warsaw, we held the Virtual Florence, and now I would like to explain what the Virtual Florence is.  The Virtual Florence is an international group of experts from different fields.  We split them into four groups: first, business, but not only business, also politics and the media; second, technology; third, science; and the fourth one is spirituality.  That means not only religious leaders but spirituality in a broader sense: psychology, the arts, and spiritual teachers too.

And why do we do this?  Because we think we cannot create the AI of the future based only on the IT guys.  The AI of the future should be created by people from different fields, because AI is our future.  It cannot be created by just one group of people who decide in which direction we should go, especially when we look at our civilizations and our regions.  Each civilization has a mix of amazing geniuses and amazing ideas.

And during our first Virtual Florence meeting, we developed special tools for collective work: we gathered experts from different fields and gave them the tools to create ideas about which direction we should go.  First, we created a definition of compassion.  If we would like to create compassion AI, first we should know what compassion is.

Second, we created a special IP, the compassionate AI model.  We understood that if we would like to create the AI of the future, we should use a loop in which we have not only the human and the AI, but also two very important things which always appear; this is what we understood in our workshops.  The biggest thing we are facing now is fear: fear because we are afraid of AI, afraid of the future.  And when we are afraid, we cannot do anything, because fear stops us.

Then we understood that we should not only have compassion, but also work with human fear.  This is our compassion AI model, which we believe will help us later, in the future, to teach AI compassion, or to be compassionate.

On the first ‑‑ on the second ‑‑ yes?

>> ROBERT KROPLEWSKI: If I could intervene.

That is interesting, because what I see in that model is a proposition with two sides: how to deal with fear and convert it into compassion is one approach, and the other is how to deal with humans and redesign the artificial intelligence system so that it is finally able to express the compassion experience.

>> EDI PYREK: Thank you very much, Robert, for your explanation.  You did it better than me.  During the second Virtual Florence, in 2023, we tried to put our idea into a product, after one or two days of workshops with experts from different fields.  We had physicists, the best physicists in Poland, but apart from these amazing people, we had an android with AI.

We had the android with AI because, again, if we are talking about the future of AI, we should include AI in this conversation and in this work.  During the workshop, we came to the conclusion that what we can do is create a tool with which, together with the AI, we can teach people compassion.  We think that if we would like to create a compassionate world and a compassionate AI, first we should start thinking about how we can be compassionate, how a human being can be compassionate.

And yes.

>> ROBERT KROPLEWSKI: And now what I understood from our ‑‑

>> EDI PYREK: Conversation.

>> ROBERT KROPLEWSKI: Our conversation offline.  The output of that Virtual Florence is a call for developers.

>> EDI PYREK: Yes, it's a call for developers; it's a competition: how to create an environment or a platform, a solution, which uses gamification and the flow state to teach, in a psychologically safe way, things like positive behavior; how to take care of nature; how to treat people; and our arts and development.

And it happened in March 2023, during the conference in South Brook.

Our next step was the AI for Good conference in Geneva.  During this conference, we had the kickoff meeting for the GAIA Guardians.  It's a platform.  During this kickoff, we had amazing people like David Hanson and Stephen Ibaraki, the creator of AI for Good at the UN, and together with them, we started working on the platform.

It is an organisation that will create decentralized AI in the future.  Our idea was to gather all the people who want decentralized AI.

And now we are in Kyoto.  We are here to present our idea to the policymakers and to remind them that if we would like to have a really big change, we cannot work only from the top down; we should also work from the bottom up.

It's amazing: those of us who work on the law, the AI guys, the teachers, the fathers, the spiritual teachers, the mothers, the cooks, we all should work together to create this AI of the future.

What?

>> ROBERT KROPLEWSKI: What I understand from this: talking about law, recommendations, and policymaking principles is one thing, but you propose that we also need some very concrete product, some special technical environment?  A sample.

>> EDI PYREK: Yes, of course, it's not enough to talk.  Sorry, but quite often I use the words "intellectual masturbation"; this is what I witness too often during all kinds of conferences.  We have amazing conversations, everybody thinks "I am the best, we know how to save the world," and everything ends after the conference.

We need a product.  We need a call to action.  We need to have impact, and this is why our next milestone will be in March 2024, in South Brook, where we are making the AI Impact Summit.  We would like to attract all the people who are working with AI and impact, and show them that we are not creating AI for fun, for watching porn, or for driving better cars.  We need AI.  AI may be the most dangerous thing in the world: it can be a tool of mass destruction, and it may also be the only hope for us.  With AI, I deeply believe, we can solve the problems of climate change, of sickness and illness, of war, et cetera, et cetera.

But only if this AI is decentralized and based in compassion.  When we think about the Sustainable Development Goals, only 16% of our dreams are coming true now.  Why?

Because we have no compassion.  If we had compassion, we would not kill nature.  If we had compassion, there would be no war.  This is why we need compassion.  We need AI with compassion, because we need AI that will show us our blind spots and teach us more about our humanity, about the arts, even about consciousness and emotion.  This is why we need compassion AI: to teach us, to be our partner.

>> ROBERT KROPLEWSKI: Thank you, Edi.  You believe that AI could be beneficial from the compassion point of view?

>> EDI PYREK: Yes.  Without ethics, without compassion, it can be dangerous.  We are screwed.

>> ROBERT KROPLEWSKI: Thank you for that.

Now I would like to ask my colleague and co-moderator Damian to play the video from Emma Ruttkamp-Bloem, from the University of Pretoria.  She worked on the ethics recommendation coming from UNESCO.  Please, Damian, play it.  It's coming.

Yeah?

>> EMMA RUTTKAMP-BLOEM: Thank you very much for having me.

I'm very sorry that I can't join you in person and not only that, I can't join you for questions.

Unfortunately, technology has not evolved so far that one can join a Zoom meeting from an airplane ‑‑

>> ROBERT KROPLEWSKI: We are having a technical issue.  Give us some seconds.

>> EMMA RUTTKAMP-BLOEM: And unfortunately I can't join you for questions; you can't join a Zoom meeting from an airplane.  Please connect with me after this talk if you have any questions or just want to have further discussions.

So the title of my talk is "Global Compassionate AI Ethics," and I will tell you what I think that could be in the context of the UNESCO Recommendation on the Ethics of AI.

I want to first reflect a little bit on why AI technology is important.  Where does all of this agitation come from?  This is a technology that's advancing at high speed, and it is a technology that to various degrees and in various ways threatens human agency and autonomy, and we want human-centered technology that to various degrees keeps humans in the loop.

Secondly, it's a technology that can leverage massive amounts of complex data in ways that humans can't, and this is part of the reason for developing the technology, of course, but it also brings certain concerns.

And thirdly, it impacts humans in all facets of their lives: in more far-removed ways in legal terms, in accountability and responsibility, but also in inclusivity and nondiscrimination, the right not to be manipulated, the right to integrity, and so on.

This technology is also so fascinating because it has an immense power for good and, on the flip side, an immense power for harm.  So what we have to figure out is how to maximize the power for good and minimize the power for harm.

Against this background, I want to talk to you about the global Recommendation on the Ethics of AI, because for these reasons, and also based on a report from the World Commission on the Ethics of Scientific Knowledge and Technology, the UNESCO General Conference at its 40th session asked UNESCO to elaborate a global instrument on the ethics of AI.  This work took from April 2020, smack in the middle of lockdown, until November 2021, when 193 Member States adopted the recommendation.

Just shortly again: why, from a slightly different perspective, do we need this recommendation?  AI technology threatens harm to individuals in such deep layers of their lives that ultimately the harm will be to humanity as a whole.  There's the complexity of the issue that I have already talked about, and then realizing sustainable AI development requires international cooperation.  The companies that develop this technology are transnational companies, and so we need global cooperation to ensure responsible governance of these technologies.

Also, widening the inequality gap in the end will backfire on everyone.  Think of the African continent, which is the continent with the lowest median age.  If Africa is left behind again, it will impact on the whole world in various ways.  Then, of course, what is the value of the recommendation?

This is very, very important to understand and to realize: it will lead to cooperation and shared responsibility among multiple stakeholders across various levels of regional and national communities.

Now, if we take a second to think about the aims and objectives: obviously, this recommendation aims to provide a basis to make AI systems work for the good of humanity, and to bring a globally accepted normative instrument, including issues of gender equality and protection of the environment and ecosystems.  It's about the good of humanity, but it's also about the good of the environment and ecosystems, and there's a focus on inclusion issues, especially in terms of gender.

So on the whole, the recommendation aims to enable stakeholders to take shared responsibility based on a global and intercultural dialogue.  So here is the first glimpse of the compassionate AI.

The values that Member States identified for the final version of the recommendation are: respect, protection, and promotion of human rights and fundamental freedoms and human dignity; environment and ecosystem flourishing; ensuring diversity and inclusiveness; and living in peaceful, just, and interconnected societies.

We have quite a lot of principles, like safety and security, fairness and nondiscrimination, the right to privacy, human oversight and determination, transparency and explainability, and responsibility and accountability.  But we also have a new one, proportionality and do no harm, which is basically about situating a risk-based approach at the core of the recommendation.

We also have sustainability as a principle; usually, when it is mentioned, it's a value.  This is, in a sense, to concretize the value of environment and ecosystem protection, because while this technology can really help to reach the SDGs, it can only do that if we understand that there is a continuum of factors that impact the level of realization of these goals in various regions of the world.

Then we have multi-stakeholder and adaptive governance and collaboration, and we have awareness and literacy as our last principle, because civil society is an AI ethicist's biggest friend.

We did not stop with values and principles.  We wanted to focus on the how and not just on the what.  So we had to find a way to make the recommendation concrete enough to make an impact, firm, but at the same time open enough to ensure adherence and supple enough to have validity in the future, which is a really tall order, as you all know.  And then somehow to ensure that some of the actions will achieve trustworthiness of this technology.

In order to do this, we identified 11 areas of policy action, and we gave detailed actions in each area, so that Member States have some guidance on how to concretize the values.

UNESCO is completely committed to supporting Member States in the implementation of this recommendation, and UNESCO has already developed a methodology for ethical impact assessments ‑‑ (No audio).

‑‑ that they may be at different stages.  There are various other ways in which UNESCO is willing to support Member States.

Now, having given the background on the recommendation, let's take a few seconds to move into compassionate AI.  What could this be?  I want you to honestly take a second to reflect on the answer you would give to each of these questions.

Who are you?  What would be the main quality that you would use to describe yourself to other people?  What determines the nature of your thoughts and actions?  What determines your agency or your autonomy?  What link is there between your autonomy and your moral responsibilities?  And what does respect for your autonomy require from other moral agents?

So, on the basis of those questions, I want to tell you about my notion of positive AI ethics.

I do this by quickly introducing an approach in philosophy used when we consider issues of the meaning of life and think about how to achieve a life of well-being.  For philosophers such as Martha Nussbaum, capabilities are political entitlements that impose duties on governments to enable their citizens to realize lives of well-being.

In the context of AI, the question is what will allow humans positive liberty and capabilities; if not political entitlements, what type of entitlements will do this?  There is a distinction between negative and positive liberty.

Positive liberty is more interesting, because it is about doing something with this liberty: actionizing the liberty that you have to live a life of well-being, to take control of your life.

This moves into the notion of capabilities, which is about what you need to achieve a life of well-being, not only having the ideal of a life of well-being.  I think, obviously, the kind of entitlements that we need are ethical entitlements, and these place positive duties on all AI actors.  What are positive duties?  This is also philosophical, the distinction between positive and negative duties, but more recently some have written on this in the context of AI ethics.

Negative duties are simply to do no harm.  Positive duties, again, are the more interesting ones, because they are about protecting the vulnerable such that no harm is done to them.  It is about doing something with the fact that you have a duty placed on you.  In this context, AI ethics would enable humans to flourish, would enable a meaningful technology-society interplay, which is really important, and would maintain the integrity of technological processes, not stop innovation.

So the compassionate argument for AI ethics is this: AI innovation for the good of humanity relies on the actualization of certain ethical values and principles, as ethical entitlements or capabilities, into positive actions and duties that will actively prevent harm and support human agency and autonomy.  And I forgot to say on the previous slide: these duties are duties that all AI actors share, and AI actors incorporate the researchers, the designers, the deployers, the users.

So obviously governments are also included here.

So AI ethics in this sense, to give you an example, translates and actualizes ethical entitlements, such as the right to privacy, and positive liberties, such as deciding whether or not to sign a consent letter, into positive actions for AI actors: for instance, ensuring control over third-party sharing, access to one's own data, and so on.

To end off with a bit of philosophical reflection, and again thinking about the whole aim of compassionate AI: why does it matter to reflect on what it is to be human in the era of AI?  Why are we doing it?  It ensures that AI ethics becomes actionable and positive.  It's a human-technology mediator, and it presents ethics, in fact, as a dynamic mechanism for translating abstract principles into positive duties and actions for AI actors, to achieve a life of well-being for all.

So it affirms ethics as a compass and an enabler of human flourishing and trustworthy technology.

>> ROBERT KROPLEWSKI: Thank you very much for your insight into the work on compassion, for being open to defining a new approach for UNESCO, and for finding some solutions to govern the gaps.  What I gathered from your presentation, and what I like very much, is positive liberty.

We need positive actors.

What was underlined is very good, but at the beginning we still need to work on the approach of exchanging values and possibilities.  With those thoughts, I would like to give the mic to David Hanson, designer and founder of Hanson Robotics, known for the Sophia robot.

And David, is the high-tech industry able to adopt that kind of idea and do something positive, to be positive actors, finally?  If you could share some thoughts with us.

>> DAVID HANSON: Thank you.  Excellent discussion on some very important issues of how AI can impact human lives.

So AI is a tool, and in a way it is a portal to access our own information in some regards.  It is bio-inspired technology, inspired loosely by the way spiking neurons work in nervous systems, and it then accesses human data to find hidden patterns in it.  There are some very interesting implications: by being bio-inspired enough, systemically, these technologies could become living beings that we would then have to consider as potentially sentient autonomous beings deserving respect.  This is science fiction today; we don't have deep sentience in machines.  We might have glimmers of life, because they are bio-inspired, inspired by the fundamental information that we are gleaning from biology, and you see these feedback loops where the technologies are then enabling the discovery of new aspects of intelligence.

We are representing this in computational neuroscience, and those findings are informing new architectures in artificial intelligence.  Behind the scenes these technologies are advancing very quickly, and they are moving most rapidly in the corporate sector.  We are seeing corporations taking the risks and raising the money to propel these technologies forward in ways that are very helpful to us, that are transformative.  Let me give you an example: AlphaFold from DeepMind has applied artificial intelligence to unlock the proteome, the functioning molecular components that build everything that lives.

You go from the genome to the proteome, and that builds everything else, and that's us.  AlphaFold gave us tremendous clues about all the human proteins and all proteins in nature, and it's a revolution in the biosciences.  So from the corporate sector to the public sector, you are seeing this kind of transformative cascade of the technology.  Of course, a lot of these ideas came from academia and from esoteric research: 50, 60 years of work in the information sciences that gave us things like computing, and thinkers like Turing and von Neumann were already considering artificial intelligence.  A lot of those who gave us computing and information technology were thinking about thinking machines, and they laid foundations that only became obvious to lawmakers and the public within the last few years.

Well, it started much earlier than that.  This dynamic interplay between policy, academia, the thinkers of the world, and the corporate sector has been at play, and so the question is: how can we take these forces and factors and make them work better for the greater good?  And I think about compassion.  To distill it down to a simple definition, to add my definition to the many definitions that people are providing: for me, compassion is the appreciation of life.

It's that simple: to appreciate life.  Life in all of its diversity.  Life as a whole sustainable ecosystem.  Life as it was in the past, the natural history of life, and life as it is today: dynamic systems that we may not understand.  We do not understand much of how life works, even human biology, and we don't understand a lot of aspects of human cognition.  So it's not just appreciating the things we know, but also appreciating the fact that there are many things we don't know.

It's also appreciating the diversity of human life in all of its forms, and the interdependence of humans within the web of life.  With this concept of compassion, I see reflections in many of the traditions of compassion.  One insight into compassion that relates to artificial intelligence came from the science fiction writer Philip K. Dick in the 1970s: the difference between humans and machines is compassion.  It's that simple.

He went on to say that a machine that could express more compassion than a human would, in effect, be more human than a human who lacks compassion.  And humans are amazing, with our neuroplasticity, our ability to adapt; we are in effect defined by that.  The difference between humans today and humans 50,000 years ago is the technology of our language, more than anything, probably; the technology of our ideas, and the conveyance of those through the machines that we built, in effect externalized this, but our minds continue to evolve.

This idea of compassion is then expressed through the technologies that we make in our corporations and in our schools, but it has to get out through sustainable economic factors.  There's not just the economics of the ecosystem, certainly energy exchange, that kind of economy in ecosystems; we have to make things that give people jobs, make money, and keep things from collapsing.  There has to be economic sustainability.  The corporate sector can facilitate this in a way, but we have to look at the bigger picture, because it's bad economics if we are only serving next quarter's profits for publicly traded companies.  We have to look at the economics of 100 years, of a thousand years.  We have to look at the economics of our children.  The only way that corporate activities make sense is in this larger picture.  One of the problems is that we can shut off, we can filter, our sense of compassion in order to achieve something that we want.  We see it; we evolved this way.  We have the neural architecture of chimpanzees; basically, we are the third chimpanzee, as Jared Diamond says.

So we have to use these technologies to help us actualize.  There will be so much more profit for all of life if we can do this, if we can achieve this ethics of greater appreciation of life and life's potential: appreciation not just for the way that life has been and is today, but for what it could be in the future.

Humanizing robots has been my aim, but with the goal of creating AI that can enhance human caring, can help us awaken to caring, and then may eventually be capable of caring.  Right now, the GPT algorithms and models that are created, anything like Claude, GPT-4, and EleutherAI's open-source models, not just ChatGPT, don't care.  You can prompt them to behave like they care, but they do not care.

So it is up to us to care about the future, up to us to enhance our capability of caring.  So the question, and it is not an answer, the question is: how can we, in industry and academia and government and nongovernmental organisations, and as individuals, create these technologies that enhance caring?  And I would say that the UN is, in effect, a machine for that.

But we need to make it move towards action, not another form of escapism.  How can we create the actual tools of democratization of AI and put them into something like an AI commons that serves the greater good, not the interests of any one corporation, or one government, or one nation or a few nations acting together, but creates the smartest, best, most compassionate AI that brings out the most compassionate aspects of humanity for people around the world?

This is a question.  Thank you.

>> ROBERT KROPLEWSKI: Thank you, David, for a very good, valuable presentation; your speech was very emphatic and energetic.  What I take from it is understanding compassion as appreciation, as both noun and verb: we must understand in a deep sense what compassion is, and then act, do something positive, as was said before, democratizing assets and collaborating.

>> DAVID HANSON: Compassion in action is very important, because otherwise it's an escapism into a fantasy about compassion.

>> ROBERT KROPLEWSKI: Yes.  I would like to ask Tom ‑‑ Marc Buckley, excuse me ‑‑ for a short speech.  Do you see this as possible from the SDG, the Sustainable Development Goals, experience, your experience working with this?  Marc, you are invited.

>> MARC BUCKLEY: Absolutely.  I really love what David said, and I agree.  There are a few things that are really interesting, because never before in human history have we gone from one age or epoch, or had a transformation, without some form of technology: fire in the nomadic age, the steam engine, the printing press, the computer.  And it's interesting that we are at the same pivotal moment in time; we have AI and the emerging technologies, and we are entering a new age or epoch.  I believe we need to leave the Holocene and get into a new epoch.

The problem is we're fallible; we are not consistent.  We need some type of innovation or system out there that helps guide us in the right direction, with that compassion and that ethics, to give us the support, the knowledge, and the training of cumulative human wisdom, so we don't make the same mistakes or repeat the same things over and over again.

AI has many examples of how it can integrate with the Sustainable Development Goals: for the first time in human history, the first ever moonshot, or earthshot, 197 countries agreed on a plan of action, a roadmap for the future of people and planet, to protect the planet and ensure it for humanity.

The big issue is there's a lot of debate and controversy because there's no collective intelligence, no AI to accumulate all of that knowledge and show us the innovative way to go forward and kind of be the mediator between us all.

At the beginning, David talked about sentience as well, and he talked about economics.  We need to be aware that the debate is not about sentience, but about whether technology is domesticating human beings or we are domesticating technology, and what we as humanity are willing to sacrifice for technology.

The other big factor is that having this help and this guide that has compassion, has ethics, and is innovative can really give us the edge to move exponentially into the future, so that we hold to the goals, the targets, the indicators, the monies, the transformation.  And that's where what David said about economics comes in.  Most people don't know that the Sustainable Development Goals are an entirely new ecological economic model: 90 trillion US dollars by December 2030 to reach the Sustainable Development Goals.  If you don't think that 90 trillion US dollars is an economic model, I don't know what is.

The Netherlands' tulip economy is a lot less than 90 trillion dollars, and it's considered its own economic model.  This is a new ecological economic model that is a plan and a way forward for humanity that I think businesses can use.  David touched upon it so eloquently, and I'm in full agreement: if we do that, and do it in the right way, we can make some huge achievements and really achieve the goals in the shortest possible time, and the economic model is already there.

>> ROBERT KROPLEWSKI: Thank you, Marc.  That is probably the best moment to invite Tom Eddington for an eight-minute speech.  Is big business able to share assets to empower the Sustainable Development Goals, even from the human dignity perspective, the ethical perspective?  What do you think, Tom?

Oh, sorry, we don't hear you.

>> TOM EDDINGTON: Thank you for the opportunity to be here.

>> ROBERT KROPLEWSKI: Yeah.

>> TOM EDDINGTON: I think, you know, talking about business and business opportunities, just a little bit of background first.  I believe that when we're talking about AI, we're at a Prometheus moment, like when the god Prometheus brought fire to humanity.  That's where we are as a species with regard to AI.  We have this carbon-silicon relationship that's being generated, being formed.  Businesses are trying to make sense of it.  We don't have defined business models yet.  There are billions of dollars being spent on AI.  Each of the businesses that has spent that kind of money, Amazon most recently with a $4 billion investment, is trying to figure out: how are they going to make money with AI?  They're looking through the lens of specialization.  They are looking through the lens of making money.  And they're not looking through the lens of some of the other points that have already been raised by David, Marc, and others.

Unfortunately, where we will find ourselves is similar to what's happened with climate change.  If we go back to 1971, the Secretary-General of the United Nations said that with all of the geniuses and with all of their skills, they ran out of foresight and air and food and water and ideas.  And Antonio Guterres in 2021 was once again talking about climate change and the hubris of our leaders.  We are looking at AI solely through the lens of market share, bringing common business practices to a new technology, a new way of doing business, seeing huge market opportunities without really looking at the potential impact on humanity.

August 2nd of this year was Earth Overshoot Day, when we had already used more resources for the year than the planet makes available.  AI has the potential to help us solve that; it also has the potential to accelerate it and create more of a problem.  So if there's not something that helps guide businesses in their decision-making process and helps inform their business models, like an AI charter similar to the Earth Charter that was created in the 1990s, we run the risk of the extermination of the human species.

So we should look at creating not only regulation and policy, but incorporating compassion; looking at decentralization versus centralization, as we've seen with power generation; and really looking at processes and methodologies that match the problems: using a public health model, a virology model, a war-games model, an internet cybersecurity model, or scenario-planning models to really understand and define the potential risks of AI, and how and by whom it should be overseen, and who should have impact on the thinking behind it.

I look at someone like Nicholas Robinson at Pace University, who has said that generative AI is emerging faster than we can cope, so we need not try to outrun the machine, but regain mastery of ourselves and our ethics and create the self-discipline to manage the uses of AI.  Bringing that kind of vocabulary, that kind of mindset, that kind of thinking into industry and into the development of business models is essential if AI is to deliver the promises that we all hope for without the risk.

>> ROBERT KROPLEWSKI: Thank you, Tom.  What you said is very interesting, and I see that business, even if not prepared until today, is organizing itself to be prepared to share assets, and this is great to observe from your intervention.

Marko Grobelnik, I would like you to add to Tom Eddington: how are the international organisations in which you are engaged preparing for that kind of thing, maybe the gaps and asymmetry that Tom tried to set out?

>> MARKO GROBELNIK: Thanks.  Tom nicely referred to the whole thing as this Prometheus moment, and it's true.  We can see this on the scientific side, as well as on the commercial side, by all the indicators.

Now, one aspect which is kind of relevant: it's true that on one side we have all of these international organisations which you, Robert, listed before, the OECD, the Council of Europe, including NATO, UNESCO, and a few more, which are trying to regulate this AI.  Most of this evolution started in, like, 2018, 2019, right?  So definitely years before the so-called ChatGPT moment, this Prometheus moment that Tom mentioned.  Back then, AI was kind of slow.  We were regulating or discussing the AI that was happening in those years, certainly the AI that was happening after the year 2000 or after 2010, which didn't have the huge tempo it has now, right?

And then what happened: in late 2022, the ChatGPT moment happened, and all the regulators basically got confused.  This especially includes the regulators which had a plan to bring legally binding documents, so this would be the Council of Europe and the EU, right?  And it was unclear what to do, because the principle of work was different.

And what's happening now, during 2023, is that somehow these organisations are trying to adapt.  What we see is that there are basically two major approaches, right?  One is the slower, democratic way of preparing the regulation, and this is what most of these organisations are doing.

On the other hand, there's a more normative approach to establishing this balance between the power of AI and some kind of public trust, and to preventing, possibly preventing, dangers.  This is what the US and Canada did just recently: Canada maybe two weeks ago, the US maybe a month, a month and a half ago, right?  These are voluntary codes of conduct between selected Big Tech companies and the government.

This is something that establishes trust by a handshake, right, which is also kind of interesting.

So this is how I see the relevance of the whole thing in this last year in particular, right?

And just one last statement.  This year I visited many events; unfortunately I couldn't be physically in Japan, but I was busy traveling for the last three months to all sorts of AI events.  What Tom was saying about companies running for commercial value, a land grab or a market grab, I would say is mostly true, yeah.  This is mostly true, and there are at least two levels of this competition.  One is between the companies themselves: at least on the Western side, we have three or four companies fighting for the major stakes, Microsoft, Amazon with AWS, Google, and Meta to some degree, right?

Although running mostly on AWS, right?  So this is between the companies; this is market competition.  On the second level, you have geopolitical competition, which mostly goes on between the US, Europe, and China.  Hmm?

China is coming, and China is good, right?  They have all the brains you can imagine; they just lack the hardware.  But they will likely be competitive as well.  So, okay, not to be too long, right?

(Laughter)

This was just a comment, a couple of thoughts on what Tom said.

>> ROBERT KROPLEWSKI: Thank you, Marko.  Those were good comments.  You gave us the essence of four years of work in international organisations, how they revitalized their considerations and are now looking at how to cover the gaps and deal with the challenges.

Edi, given how far we have come, is something still missing in what you ‑‑

>> EDI PYREK: Yes, I will be short because I know we are running out of time.  First, this is what we tried to do in the Global Artificial Intelligence Alliance: we try to find the right question, not just look for the answer.  I think it is the question that moves us, the question that changes reality.  And this was one of the questions you asked me.  I think we need a good question.  We should start thinking, asking ourselves what we don't know, what we don't understand.

Second, I will come back to this: I think we forget that rules and regulations are not everything.  As Kant said, "the starry sky above me and the moral law within me"; that's what we should have.  We should start from ourselves.  Before we think about creating AI, about creating any kind of technology that can destroy us or help us, we should start to think: who are we?  What are we doing?  What is most important for us?  What kind of ethics and what kind of world would we like to create in the future?  I think these are the things.  We don't really know what kind of world we want to create.

I think we are still busy with the time which is now, and we are not asking ourselves what the future should look like, because we don't know.  We don't have the imagination; we don't have enough imagination.  We need a good question.  We need to remember that everything starts from us, not from technology.

>> ROBERT KROPLEWSKI: You said that you would like to additionally ask Marc for ‑‑

>> EDI PYREK: Yes, yes.  Marc, we had this amazing conversation when you spent a few years asking people about the future, and if you can just ‑‑ I will give you my time if you can ‑‑

>> ROBERT KROPLEWSKI: Marc, we have changed the structure.

>> EDI PYREK: You remember the question, about how they see the future?  I love what you said.

>> MARC BUCKLEY: Yes, absolutely.

So I'm just sharing my screen now, and hopefully you can see it, because I want to ‑‑

>> ROBERT KROPLEWSKI: Yes, we can see it.

>> MARC BUCKLEY: It's an old question that we have been asking for 70 years: what does a world that works for everyone look like for you?  It's a big, huge social experiment that I conducted.  I have asked 3,500 people this question on video, on podcasts, and at events; most of the people I've asked are authors, and some interesting things happen when I ask them the question: what does a world that works for everyone look like for them?

>> ROBERT KROPLEWSKI: Marc, we have some technical problems; your screen keeps appearing and disappearing like ping-pong.

I don't know if it's specially prepared.

>> MARC BUCKLEY: I can do it again.

Just one second.

Sorry about the technical issues.

>> ROBERT KROPLEWSKI: Okay.  Maybe you ‑‑

>> MARC BUCKLEY: Hold on.  Here it is.

>> ROBERT KROPLEWSKI: We will come back to you in the next turn.  Okay, now we see it.  Okay, we come back to you, Marc.  Please.

Now we have a problem with voice?

>> EDI PYREK: We miss you, Marc.

>> ROBERT KROPLEWSKI: We will come back to you in a few minutes.  Next was David.  David, only two minutes to say whether the GAIA Foundation is prepared to do something.

>> DAVID HANSON: Yes.  So we founded the Global Artificial Intelligence Alliance with the aim of making something truly global, something democratic, where individuals can get involved, but which also incentivizes corporations and governments and NGOs and many other people, anybody who has an interest in the future of life and how AI can help, to get involved and benefit from this.

And so the big question of questing is very important.  Having the right incentives for people to be involved is important.  Gamification is a principle that goes beyond games; the profit incentive can be real for companies, but also for individuals, where they have access.

So there are a couple of things.  One is: how do you create this kind of democracy of action?  I think that crowdsourcing and market dynamics can really help: you vote in, and you get something back.  People's information then becomes really valuable, and instead of just taking it, having them sign a license and give their data away, people should be able to have their voice heard and participate by licensing in.

This kind of global data commons can be useful; a global data and AI commons can be incredibly powerful.  There's the old story of the stone soup, where there's no food.  Everybody says there's no food, but one person says, "I'm going to feed the whole village with this stone."  But everybody else has to put in something as well.

You put in the stone, and then somebody brings carrots, somebody brings potatoes, somebody brings other ingredients, and pretty soon you have a big pot of soup that feeds everybody.

If we do this with AI, where people bring something to the table, we could see AI get smarter faster, but in a way that is truly inclusive and transparent, so that the researchers of the world who don't have access, the people who don't have access to AI, get access.  But we have to include people from all over the world.

It really has to include the people in developing nations who don't have access to this technology.  It has to include leadership from the Indigenous community.  It has to include the children of the world.  And so we need what we have come to call the guardians, the GAIA Guardians, the guardians of the world: people who step forward to be representatives in order to open the channels up for everybody else to have a voice.

Then there is that idea of action: the companies of the world right now are the ones out there doing and shipping things, because they have to.  So we have to as well; we just have to see that urgency.  Thank you.

>> ROBERT KROPLEWSKI: David, thank you for that intervention; we need to fight with time, and it would be a pleasure to ask you more.  Now, Marko Grobelnik: is it possible to define compassion approaches and principles for artificial intelligence systems?  How do you see this, Marko?  You have eight minutes.

>> MARKO GROBELNIK: Eight minutes.  I will try to be shorter, because I think I spent more time before than was planned.  So the question is: does the current technology allow us to approach this compassion AI and all the issues lying below it?  This includes concepts like empathy and values, and also how to construct and maintain the societal tissue between people, or actors, in society, which would basically be living beings, right?

The short answer: is it possible or not?  Yes, I think so.  Actually, after the ChatGPT moment in November 2022, roughly a year, 11 months, ago, it's the first time in the history of AI that we can even think about this.  Why?  Because AI before was missing one extremely important element, and this was, let's say, text understanding.  With ChatGPT and large language models we are kind of approaching text understanding.  The machines really don't understand the text yet, right?  But they can mimic text understanding to a degree that's good enough.  So this is the status.

These LLMs are literally just reflecting what we put in.  We put in the whole world, and the LLM reflects what we put in.  But since there's so much information, we get the feeling that these machines are actually smart.  And it is actually a pretty impressive moment in the development of AI that we can do something like this, right?

What else is there as an ingredient of this kind of AI technology?  It's not just reflecting; it's retrieval of what we put in, but there are also some limited capabilities of inference or reasoning.  It's not perfect, but there exist elements of deductive reasoning, and a little bit less induction, which machine learning is covering on a separate track.  Machine learning is good at inductive reasoning and also amazingly good at parts of causal reasoning, right?

Why am I saying this?  Because these are the ingredients on top of which we can then develop this compassionate AI as a functional system, right?

Now, from the other side: what is AI?  AI is this nice term which we have used now for, I don't know, 70, 80 years.  But on the other hand, we can say that AI is a science of complexity.  There is also a separate complexity science, which mostly physicists are working on, but AI by itself is also dealing with complexity.

As was said before, I think by David, AI is looking for complex patterns in data, which comes mostly in an organic way from society.  So this AI is basically solving a fairly complex problem.

Now, can it do something like compassion?  Yes, I think so.  I will use a fairly mathematical way of expressing it: if we want to develop an operator, a mathematical operator, which we would call compassion, it would consist of empathy, positive human values, or liberties as was said before, and holding the societal tissue together in a positive way.  Yes, then we can approach it, I would say, with the ingredients I mentioned before: reflecting human knowledge and data on one side, and some limited capabilities of reasoning on the other.  These are the ingredients with which we can approach it.

Now, how could this be implemented?  We could implement it as an additional layer on top of existing systems, not just AI but also IT systems, which could try to understand, guide, or steer the decisions of those IT or AI systems.  This is something which I think is implementable at this stage.

Can companies do this?  Companies are actually doing a little bit of this.  I mean, in the last year, we remember the first version of ChatGPT, how it was in November of last year, and the version that responds today; it changed a lot, right?  It doesn't allow certain negative queries and so on.  But they achieve this not by any higher-level philosophical approach but by simple red teaming: you have an army of people who are just killing the bad questions.

So I would imagine that compassionate AI is something more: it would have a little bit more philosophical values built into itself, and a fairly generic system on top of this, as in the sketch below.  Not to be too long, I will stop here; I could talk more.
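To make Marko's description concrete, here is a minimal sketch of such a "compassion layer" wrapped around an existing generator.  Everything in it is a hypothetical stand-in invented for illustration (the keyword heuristics, thresholds, and function names are not an existing library, nor the panel's actual implementation); it simply contrasts the blunt red-teaming-style blocklist he mentions with a value-based "compassion operator" that steers, rather than merely blocks, the underlying system's output.

```python
# Illustrative sketch only: a "compassion layer" on top of an existing
# AI/IT system, per Marko's description. All scoring functions below are
# hypothetical keyword heuristics, not a real library API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CompassionScores:
    empathy: float        # does the output acknowledge the people affected?
    values: float         # does it respect positive human values/liberties?
    social_tissue: float  # does it strengthen rather than erode social bonds?

    def combined(self) -> float:
        # One possible "compassion operator": an unweighted mean of the parts.
        return (self.empathy + self.values + self.social_tissue) / 3.0

def red_team_blocklist(text: str) -> bool:
    # The blunt approach Marko contrasts with: just kill the bad queries.
    banned = ("build a weapon", "incite")
    return not any(phrase in text.lower() for phrase in banned)

def score_compassion(text: str) -> CompassionScores:
    # Hypothetical scorer; a real layer might call a model, not keywords.
    lowered = text.lower()
    empathy = 1.0 if any(w in lowered for w in ("people", "you", "feel")) else 0.3
    values = 0.2 if "exploit" in lowered else 0.8
    tissue = 1.0 if "together" in lowered else 0.5
    return CompassionScores(empathy, values, tissue)

def compassion_layer(generate: Callable[[str], str], prompt: str,
                     threshold: float = 0.6) -> str:
    # Wraps any existing system without modifying it: generate, check,
    # and steer (request a revision) when the combined score falls short.
    draft = generate(prompt)
    if not red_team_blocklist(draft):
        return "Request declined."
    if score_compassion(draft).combined() < threshold:
        draft = generate(prompt + " Please revise with care for the people affected.")
    return draft

if __name__ == "__main__":
    # Stand-in generator for demonstration purposes.
    echo = lambda p: f"Working together with the people concerned: {p}"
    print(compassion_layer(echo, "How should a city allocate its budget?"))
```

The design point, under these assumptions, is that the layer is generic: it treats the underlying generator as a black box, so the same operator could in principle sit on top of an LLM, a recommender, or any other decision-making IT system.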

>> ROBERT KROPLEWSKI: Thank you very much, Marko, for your short intervention.  I have changed the structure of the whole session, trying to keep some time for the audience.  Now I would like to invite Marc Buckley to come back and say something about how we can impact the SDG agenda from the compassion perspective.  If you are still with us, that's okay, but we have limited time, only five minutes, to keep the last five minutes for the audience.

Thank you.

We don't hear you.

>> MARC BUCKLEY: Do you see my screen?

>> ROBERT KROPLEWSKI: Now yes.

>> MARC BUCKLEY: And you can hear me.  So ‑‑

>> ROBERT KROPLEWSKI: We don't hear you.  But we see your screen.

The United Nations has some problem with the connection.

(Chuckles)

Or not.  Or only some countries ‑‑ the United States.

Okay, Marc, excuse me, we have limited time, so let's give the floor to our audience in the room or online.  If somebody has questions or comments, you are invited; we have limited time, excuse me.  These are the last 11 minutes.  Maybe Marc can come back.  Please, Mr. Mihaus, you are from Poland?

>> AUDIENCE MEMBER: Thank you.  Can you hear me through the mic?  Thank you very much.  I represent a government, but it really doesn't matter; we are under Chatham House rules right now.  I really like the way we throw ourselves into a philosophical discussion, because AI development takes philosophical discussion to go on, to still be open.  And for me, the topic is so complex that I had to take some notes in order not to get lost in what I am trying to say.

So, you know, thank you very much.  This is a very interesting point about compassion, and the way I see it is this: if you have a spectrum, and you put compassion on that spectrum, then there must be a limit to what is still compassionate for AI to do and what is no longer compassionate, right?  So what is compassionate?  The most intuitive answer would be that whatever keeps us developing is compassionate.

And I guess this is not the right answer, because a better one would be: whatever keeps us developing while still making us more human.  That might be more compassionate than something that merely keeps us developing all the time, because there has to be a limit to what is achievable.

And I have a question; I was desperate to ask you this, though you don't have to answer it right now.  I very much liked what you said about your definition of compassion and the deep appreciation of life.  My question is meant to have you talk about what is compassionate and what is not compassionate.

You know, I mean: would you deploy AI to modify the genetics of a sheep in order to cure human cancer?  Would that still be compassionate?  I mean, it works for the humans, right?  It doesn't work for the sheep, right?  It keeps us developing in a humanly manner.  I would like to pick your brain on this, because that would tell me a little more about what you think is compassionate and where the limit of the compassionate lies.

Is our development the ultimate goal of this compassion‑based AI concept?  Thank you very much.

>> ROBERT KROPLEWSKI: Thank you very much.  Before David answers: first of all, I think we need to deal more with our own human state of compassion.  Our level of it could be under question now.  And I thank you for the sheep comparison when producing values and principles for our artificial intelligence.

Finally, we have gathered what I tried to say at the beginning: that animals are important.  We have a conjunction between the human and the planet; that is a principle.  At that time, there was a very deep conversation about what comes first in the hierarchy: the human, now artificial intelligence, or something in between.  David, please intervene.

>> DAVID HANSON: Sure.  Consider a lot of the ethical systems, laws, or regulations that we have; this includes things like regulations protecting animal rights for research purposes.

You have to go through ethical review boards to be able to do science with animals.  Effectively, what that is, is an attempt to weigh the costs and benefits and then represent the ethical conundrums that occur.

It's very much like what Marko was talking about: the almost Boolean logic of compassion.  You run through a calculation: is it worth it?  Well, sometimes, if you are smarter, you don't have to sacrifice ethics in one situation, or create suffering in, say, sheep animal models, in order to achieve some medical breakthrough.  Maybe you can do it in silico instead and achieve it with a simulation.  Right now we are not smart enough to do that, but we might also not be smart enough to be as compassionate as we could be.

Could we use these technologies, the in silico ones, to enhance human compassion, to be able to run these types of calculations?  Maybe we can.  Maybe it's a quest worth pursuing.
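
The "is it worth it?" calculation described here might be sketched as below, under the loose and editorial assumption that benefit and suffering can be reduced to comparable scalar units; a real review board weighs far more than two numbers.

def is_it_worth_it(expected_benefit: float, expected_suffering: float,
                   in_silico_available: bool) -> str:
    # The "almost Boolean logic of compassion": an ethical review board's
    # weighing reduced to (hypothetical) units of benefit and suffering.
    if in_silico_available:
        # The smarter path: a simulation avoids the suffering entirely.
        return "approve: run the study in silico, no animal suffering needed"
    if expected_benefit > expected_suffering:
        return "approve with conditions: expected benefits outweigh the costs"
    return "reject: the suffering caused is not justified"

# Example: an animal study for which a simulation could substitute.
print(is_it_worth_it(expected_benefit=10.0, expected_suffering=3.0,
                     in_silico_available=True))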

>> EDI PYREK: May I add something?  Just one thing: if you believe that AI can create super AI, and super, super AI, maybe artificial intelligence can create super compassion, and super, super compassion.  Because ‑‑

>> ROBERT KROPLEWSKI: And super human.

>> EDI PYREK: Superhuman, sure.  But it can push us forward in understanding: when we start to understand human nature better, we start to understand compassion better, and I deeply believe that we can use AI to create super compassion.  Then the answer will be completely different from the answer we have now.  This is why I'm talking about the questions.

>> ROBERT KROPLEWSKI: Thank you, Edi.

We have two people who would like to take the floor ‑‑ even four.  We have only five minutes.  Please, short question, short answer.  Christian, welcome.

>> AUDIENCE MEMBER: Thank you.  Christian from the OECD.  I have a very brief question.  Is it fair to say that at the current stage of AI, where I see AI as being closer to software than to a human being, the level of compassion is essentially dictated and capped by the level of compassion of humans?  And is it fair to even say that it is probably capped by the level of compassion of those who have the capacity to develop it, which is currently those with the financial resources?

>> MARKO GROBELNIK: This is ‑‑

>> ROBERT KROPLEWSKI: Yes, Marko, please take it.

>> MARKO GROBELNIK: Very quick answer: yes.  At the moment, the whole thing is in the hands of Big Tech; there are maybe five, certainly fewer than ten, spots in the world that can do something like this.  But there is a good prospect that things may change in the future.  Just to keep the answer short.

I'm not a pessimist.  I think things are going in a good direction.  What we have witnessed in the last year is something that I never expected I would witness in my life, right?  And this is the same for most of my colleague scientists.  We are all still watching what's happening, right?

But the answer is ‑‑

>> ROBERT KROPLEWSKI: Marko, I can confirm this, because we very often work together, so I know it's possible.  We can develop our existing outputs into new compassion approaches, yes.

Last ‑‑ we have only four minutes, and I need one minute for my intervention.  Very quick questions, please.  I don't know who would like to take it.

>> AUDIENCE MEMBER: I will have a very quick question to ‑‑

>> ROBERT KROPLEWSKI: Three minutes for all of us.

>> AUDIENCE MEMBER: I represent the national research institute in Poland, and my area of expertise is preventing and combating child sexual exploitation and abuse.  My question refers to what you have said: that you would like to include people from all over the world and have them have their say.  How would you secure the voices of children in this process, especially knowing that generative AI can now produce child sexual abuse material?  Our children can be victimized through artificially generated photos or videos used for these purposes.  So how can the voices of children be included in the process of creating compassion within AI?

>> ROBERT KROPLEWSKI: Are you asking that question of somebody specifically?

>> AUDIENCE MEMBER: To David because he was talking about it.

>> ROBERT KROPLEWSKI: David, only 50 seconds.

>> DAVID HANSON: Thank you.  Excellent question.  I think the key is having strong guardians.  We have to find people who have proven themselves to be really doing good work for the world, and that has to be inclusive.  It can't just be from one subgroup of humanity.

And we have to name the values that we're aiming for.  Those values that harm life, that harm children, that lead to this kind of destruction are not welcome in the future.  They shouldn't be welcomed.  We need guardians who take that stand, who guard our children.

And then also give those children a voice as well, so they can participate, because often we don't hear them; there are no children in this room, and I think that children have almost predetermined natural insights into the world.  So, through mechanisms like what we call the guardians, we can create a more inclusive democracy.

>> ROBERT KROPLEWSKI: Thank you, David.  Last very short question, please.  Please present yourself, and we have ‑‑

>> AUDIENCE MEMBER: Thank you, my name is Shikamortika and I'm just thrilled with what you all have to say.

What I feel is missing are the right incentives for for‑profit corporations, especially in the US; we just, you know, perform to the expectations and for the rewards.  I have been wondering how we can get rid of the quarterly earnings regulations, because many European countries have done it, right?  I have been wondering how we can get the US to stop quarterly earnings.

>> ROBERT KROPLEWSKI: The US, if I understand correctly, has started that process.  Maybe it's not so much a process as a different approach, because that discussion appeared today, yes?

Last question.  Mr. Takashita from Japan.

>> AUDIENCE MEMBER: So thank you for inviting me, Robert, and thank you all for your inspiring talks.

I don't have a question, but I actually have a last statement to make.

Number one, AI as a term is quite outdated.  Artificial intelligence: what does that mean?  I think it reflects the man/machine relationship as master and slave.  As long as humans engage with machines or AI in that way, you have the risk and the fear.

But now we have to redefine what true intelligence is, and in my opinion, that's compassion.  And David mentioned the possibility of sentient machines.  That's totally possible, on the condition that we elevate our consciousness with compassion, and we will have some intervention along the way, as Ray Kurzweil described in The Age of Spiritual Machines.  So I'm totally optimistic about the future of compassionate AI.

>> ROBERT KROPLEWSKI: Well, thank you for your good comments, drawing on Japanese culture and your life experience.  Thank you for that.

I would like to ask our online colleagues, especially Marc and Tom, if you could comment very shortly, only 15 seconds, because we don't have time; we're past the time.  If you would like to have a last intervention, please, you are welcome.

If not ‑‑

>> TOM EDDINGTON: I will go ahead and just share one closing comment.  From my perspective, we have to be intentional and architect compassion into the development of this technology, whether we call it artificial intelligence, silicon intelligence, or whatever else.  We have to be intentional about architecting compassion into it.  If we don't, it will evolve into whatever it's going to evolve into, and we can't allow that to happen.

And we're running out of time to bring that intentionality to the work.

>> ROBERT KROPLEWSKI: Thank you, Tom.  Marc, your last chance.  Only 30 seconds.

>> MARC BUCKLEY: I think artificial intelligence probably occurred because we're called Homo sapiens, the wise man.  We think we're wise and have a lot figured out.  So now, as we create our new children, artificial intelligence, we give them compassion and ethics and guidance, which is what we're hoping to do with GAIA and this group here today.  I think we can have it live up to that name, so that when we, as the fathers or creators of AI, ask it to do something that goes against life or humanity, our children, our artificial intelligences, come back and say to us: no, we're not going to destroy or hurt those other human beings.  Instead, we're just going to talk to the AIs on the other end, or in the other culture, and work it out like decent beings or intelligent beings would, instead of dividing ourselves amongst one another.

And so I really have high hopes that we can build those ethics and that compassion into AI, and that we can use it as a strong tool to help us get on the right side of history and into a new age of symbiosis among all living beings on Earth.

>> ROBERT KROPLEWSKI: Thank you, Marc.  It is time to draw some conclusions, and for me, I was super happy that you could share your thoughts and considerations and interact with our panelists.

I'm super happy with the questions.  We have serious questions, and they need to be addressed.  What I would like to propose as a call to action has two approaches.  The first: let's have an impact in this way, prioritizing the UNESCO recommendation over the SDG agenda and, at the same moment, redefining the Digital Agenda to enrich the technology, especially the ethical approach, the ethical deployment of the technology.  That would be the first thing.

And the second thing: trying to find a common understanding of compassion, especially underlining that compassion is the next step after the empathy approach.  Compassion as a verb, as an activity, as a noun, as an understanding, as knowledge; in the future, an appreciation of other people.

I would like to propose a call to produce an AI compassion bridge charter.  Why a bridge?  Well, we have some papers, some resolutions, some recommendations, but we learned from today's town hall that we have some gaps.

And I invite the many people, international organisations, audience members, and participants to produce that kind of AI compassion bridge charter and to engage in a network for a compassionate approach to artificial intelligence.  It's a call to action for the next year, not more.  We need to act very quickly.

And I very much welcome the next summit of compassion.  The location will be announced, but I would like to build a bigger network of AI guardians to develop that part of the AI charter.

And Edi, would you like to give the closing remarks?

>> EDI PYREK: I would like to ‑‑ I just want to invite you to South Brook in March, 6‑8 March, for the AI impact summit.  We need all the people who want to help.  We need all the organisations that want to have an impact, who understand that with AI we can really have an impact on the world.

Thank you very much.

>> ROBERT KROPLEWSKI: Thank you, all of you.  Thank you.  And see you at the future summit of compassion.

(Applause)