IGF 2021 – Day 3 – Launch / Award Event #55 “Human Rights-Based Data-Based Systems”

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***


>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust. 

>> And to be trusted. 

>> We all despise control. 

>> And desire freedom. 

>> We are all united. 

>> AARON BUTLER: Hello.  Can everyone hear me?  Hello?  I just want to make sure I can be heard before I begin. 

>> Yes. 

>> AARON BUTLER: Okay, great.  Thank you.  So welcome, everyone, and good afternoon.  My name is Aaron Butler, and welcome to the session entitled "Human Rights‑Based Data‑Based Systems."  I will be speaking along with my co‑speaker, Evelyne Tauchnitz, who, as I've been informed, is running late for this session while waiting for the results of a COVID test.  She should be here shortly.  I shall go ahead and begin without her, and I thank you, everyone, for your patience. 

          So today's presentation will be based on a book written by Professor Dr. Kirchschlaeger from the University of Lucerne, who unfortunately cannot be with us today, and so Dr. Tauchnitz and I are speaking in his stead.  The title of the book is "Digital Transformation and Ethics: Ethical Considerations on the Robotization and Automation of Society and the Economy and the Use of Artificial Intelligence."  From this book we'll be presenting two sections.  I will take one, and Dr. Tauchnitz will take the other, and then we will move on to a discussion.  But first let's begin with getting a greater understanding of the overall problem horizon of the book.  The book focuses on data‑based systems and artificial intelligence, and as such, it deals with the following problem horizon.  Given the intertwinement of these two technologies with human affairs and endeavors, we are at a critical phase in which digital transformation and its effect on human affairs and endeavors can be pursued with greater fervor, can be taken more seriously, in order to address the various ethical chances and risks.  As such, the research question of the entire book is as follows: How can we best navigate the intertwinement of digital transformation and human affairs and endeavors so as to better protect and promote human dignity, freedom, and well‑being in the age of the fourth industrial revolution? 

          Accordingly, the aim of the book is to consider, from an ethical perspective, the need for and possibility of human rights as an integral feature of the intertwinement of digital transformation and human affairs and endeavors.  As I mentioned, we will be focusing on two sections.  The first is human rights‑based data‑based systems.  And the second is the International Data‑based Systems Agency.  As we had planned, Dr. Tauchnitz was to present the first section, but I don't believe she is with us yet.  Let me just check for a moment.  No, she is not with us yet.  So I shall proceed in presenting this section in her stead.  So we begin with understanding what a database is.  A database is an organized collection of structured information, or data, typically stored and accessed electronically from a computer system.  Now, that's the standard definition that one finds from organizations such as Oracle.  You can also find the same thing on the IBM website as well as on the websites of other companies.  It's a standard definition for a database system. 
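
          To make that standard definition concrete, here is a minimal sketch, assuming Python and its built‑in sqlite3 module; the table and its fields are hypothetical illustrations, not examples from the book:

    import sqlite3

    # An organized collection of structured information, stored and
    # accessed electronically -- here, an in-memory relational database.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE participants (id INTEGER PRIMARY KEY, name TEXT, country TEXT)"
    )
    conn.execute(
        "INSERT INTO participants (name, country) VALUES (?, ?)",
        ("Example Person", "Poland"),
    )
    conn.commit()

    # Structured access: the data is queried by its schema,
    # not scanned as raw text.
    for name, country in conn.execute("SELECT name, country FROM participants"):
        print(name, country)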

          Now, we don't want to just address what a data‑based system is, or the nature of a data‑based system as such.  We want to address it from an ethical perspective.  And so we'll start by addressing the nature of AI, in an attempt to disentangle an ethical understanding of it from an ethical understanding of a data‑based system.  So the term AI is questionable, first of all.  One, we don't have a complete understanding of what intelligence is, and the systems that we design based on our understanding to date have certain cognitive capabilities, which are limited.  And one limitation is the lack of moral capability.  Another way to understand the limitations of artificial intelligence in its current form, as opposed to its idealized forms, is that many existing artificial intelligence systems today cannot solve problems across novel domains.  They're designed specifically to solve problems within one restricted domain.  But the general ability to address novelty across multiple domains, which is something that humans do all the time, is not an ability possessed by such systems. 

          Now, there has been a lot of talk recently about trust in AI and the trustworthiness of such systems.  But from an ethical perspective, these sorts of systems, these sorts of machines, cannot be considered to be trustworthy.  That's a property that only humans have.  To be trustworthy implies the ability to betray the person that is trusting the trustee.  And such systems, due to their lack of general intelligence and their restricted capacities, do not have the possibility of betraying their users.  Except, of course, in science fiction scenarios, but not in the real world. 

          So one thing that an AI can do, however, is solve problems within a restricted domain, and it can do that actually very well.  The current systems that we have can predict certain behaviors and certain outcomes, of course, within a restricted domain.  For example, they can help us with natural language processing problems.  Many of you have probably encountered that in using word‑processing software such as Microsoft Word and the like.  And such systems are very good at that.  They can also track information and track data even in the natural environment, tracking the topography of a terrain and these sorts of things.  Now, humans use data‑based systems to support us in making decisions.  But we need to be empowered to think critically about the usage of such systems.  Excuse me for a moment, everyone.  Dr. Tauchnitz is attempting to enter the forum.  I'm getting a message from her.  Excuse me. 

          Thank you, everyone, for your patience.  Moving on ‑‑ I am waiting for an update from Dr. Tauchnitz.  So human rights‑based data‑based systems, as opposed to simply data‑based systems, are systems that would ideally take human rights into account in their ethically aligned design.  Their importance is as follows.  They would have a baseline minimum standard of promoting and protecting human dignity.  This would include rights to privacy and protection from abuse.  They would, ideally, empower free speech.  And, taking human rights seriously as a minimum ethical standard, they would hold governments and other institutions accountable for operating below that minimum standard. 

          One of the advantages offered by such systems is that, insofar as they take human rights seriously, they can help to determine the content of our legal norms.  And human rights have universal applicability under the various circumstances in which such systems would be used, and they are recognized globally as an ethical standard that we are to follow. 

          Now, moving on to address the following question: Which human rights are concerned when it comes to digital technologies?  Specifically, it's the right to privacy and data protection, the right to be forgotten, freedom of expression, freedom of the press, rights of persons with disabilities, gender rights online, and children's rights.  And, of course, these are the same rights that would be protected offline as well.  So one of the benefits of having human rights‑based data‑based systems is that they can create uniformity in taking seriously, promoting, and protecting human rights both offline and online. 

          Some examples that Dr. Tauchnitz has prepared for her portion of the presentation: one is the exclusion of the possibility that humans should be able to sell themselves, their data, and their privacy as products.  Of course, this would restrict the nature of data ownership.  But in doing so, it would make the relevant practice more in line with human rights standards.  Another example would be the avoidance of critical system vulnerabilities that allow surveillance of individuals.  That has, of course, been in the news as a peculiar problem over the last decade. 

          This would include creating videoconferencing software that has fewer vulnerabilities that could be used as a way of spying on people.  For example, this would include having such software not contain back doors for certain agencies that would be concerned with spying on the populace.  Moreover, another example would be the search for a profitable business model, or models, that would not violate human rights.  That is to say that we take human rights seriously as a minimum ethical standard. 

          Some key takeaway points to keep in mind: this approach, as recommended by Professor Dr. Kirchschlaeger, is a cautionary approach.  He is not presuming to reinvent the wheel, if you will.  But as was mentioned at the outset of the discussion thus far, this is an opportunity to take human rights more seriously, and specifically what that means is attempting to provide recommendations to embed human rights and human rights practices more deeply at the different levels of the design, development, and deployment of data‑based systems. 

          This would include promoting algorithms that are less biased ‑‑ algorithms that, insofar as they take human rights as a minimum standard, do not contain biases that disadvantage vulnerable groups, which has also been a problem over the last decade.  We move now to the portion of the talk that I'll be presenting, as we originally planned, that is to say, the International Data‑based Systems Agency.  And afterwards I would like to open up the floor for a discussion.  A critical discussion of these matters is very important, especially at a venue like the Internet Governance Forum.  And I'm looking forward to getting to that portion of the presentation so that we can all discuss this together, and I can hear feedback from the audience as to your thoughts concerning these matters. 

          Before we proceed, let me just check to make sure ‑‑ ah.  Professor Tauchnitz is in.  Dr. Tauchnitz, are you there? 

>> EVELYNE TAUCHNITZ: Yes, Aaron.  Finally I arrived.  Thank you. 

>> AARON BUTLER: Great.  We just finished your section and I'm moving on to my own. 

>> EVELYNE TAUCHNITZ: Okay.  Good. 

>> AARON BUTLER: Okay.  And then after that we'll open up the floor for a critical discussion.  It's great to have you here. 

>> EVELYNE TAUCHNITZ: Thank you.  I'm looking forward to the discussion.  Thank you.  And if there are any questions, of course, to my part, I'm happy to address them during the discussion, either online or also with the participants being here in Katowice.  Thank you. 

>> AARON BUTLER: So moving on to the second part of the presentation: the International Data‑based Systems Agency.  What is it, and why is it important?  The agency in question, in its conception, is considered to be a global supervisory and monitoring institution in the area of data‑based systems.  Its purpose is to ensure safe, secure, and peaceful uses of data‑based systems; to contribute to international peace and security by, again, adhering to human rights as a minimum standard; to promote respect for human rights; and to be in line with, that is to say consistent with, and to promote the U.N. Sustainable Development Goals. 

          But how would this work?  The agency in its conception is designed to implement 30 principles of application.  So these are applied principles.  They serve the necessary function of enforcing an increased and stricter commitment to the legal framework, consistent, of course, with human rights as a minimum ethical standard, and they provide a means of strengthening regulation so that it is more precise, goal‑oriented, and, most importantly, determined from an ethical perspective. 

          This last point is rather important.  As we know ‑‑ and as history can attest ‑‑ the content of legal norms, in their ability to protect and promote human rights, needs to be determined from an ethical perspective.  Of course, legal norms can be determined from any perspective.  And that's partly the problem.  We need to fix it so that their content is determined from an ethical perspective.  And one of the best ways to do that is to take human rights, as a minimum ethical standard, as germane to that activity. 

          Now, these 30 principles of application are themselves grounded in certain first principles from this ethical perspective to which I'm referring.  And these can be grouped into five principles.  First, human rights as a minimum ethical standard, which I've mentioned several times and which cannot be repeated enough; that's very, very important.  Then explainability; fulfilling ethically viable social requirements, which I'll explain in a moment when we consider an example; critical safety considerations, of course; and the indivisibility of these principles.  That is to say, a way of blocking the picking and choosing of which human rights principles one wants to take seriously.  They must be considered from a holistic perspective. 

          And a human rights‑compliant conception of justice ‑‑ for example, one that would make it unjust to apply human rights to some group and not to another.  So let's take an example.  We'll consider three principles.  I have paraphrased these for the presentation.  And what I'm going to try to do here is to show how these principles of application are founded in the first principles of the ethical perspective to which I just referred. 

          So the first principle: the principles of the DSA, the Data‑based Systems Agency, must be included in the parameter settings of the developmental cycle of data‑based systems.  Now, that can be grounded, for example, in the first principle mentioned, human rights as a minimum ethical standard.  That's to say, in all parts of the design.  So, again, this speaks to ethically aligned design: at all stages of the design and development of such systems, the work must be done in such a way that, in addition to meeting the technical system requirements, it also meets the requirement of fulfilling the conditions of satisfaction of human rights. 
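
          As a purely illustrative picture of what "parameter settings of the developmental cycle" might look like in practice ‑‑ a hypothetical sketch in Python, not a prescription from the book, with invented requirement names ‑‑ human‑rights requirements could sit alongside technical ones as an explicit release gate:

    # Hypothetical development-cycle gate: human-rights requirements are
    # checked alongside technical requirements before a release proceeds.
    REQUIREMENTS = {
        "technical": {
            "unit_tests_pass": True,
            "latency_budget_met": True,
        },
        "human_rights": {
            "privacy_impact_assessed": True,
            "bias_audit_completed": False,  # e.g., still outstanding
            "explainability_documented": True,
        },
    }

    def release_approved(requirements: dict) -> bool:
        """Approve only if every requirement, technical and ethical, is met."""
        return all(
            met for category in requirements.values() for met in category.values()
        )

    print(release_approved(REQUIREMENTS))  # False: the open bias audit blocks release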

          Principle 26, of course, works in conjunction with this: all design, production, provision, operation, infrastructure, and data‑analytics companies and stakeholders must have the relevant knowledge.  Of course, we expect this even today.  But not just relevant knowledge, competency, and skills of a technical nature.  We want to hold them to a higher standard.  That is to say, having skills also in applied ethics, as expertise that is germane to developing the sorts of systems that we want to be used in the public space. 

          So, as to how this applies ‑‑ let me explain some of the rational motivation behind it.  Principle 26 can be grounded in the explainability requirement.  And the rational motivation behind that is as follows ‑‑ and this is borrowing from the work of the philosopher of artificial intelligence Nick Bostrom.  Whenever we want to use such systems in the public space, they must fulfill the social requirements of that usage.  And many of these social requirements are ethical in nature. 

          I'll give you an example.  There has been a problem ‑‑ it's being addressed by a research group at Brown University, I believe ‑‑ a problem existing in the United States of using such systems to determine recidivism.  And recidivism, of course, is the likelihood that someone who has been to prison will return to prison.  So one of the problems ‑‑ and this is an example of what not to do and why we need to take human rights as a minimum standard more seriously with respect to such systems and their use in the public space ‑‑ is that these systems, as they have been used ‑‑ I believe it's in the state of Illinois, in the Midwest ‑‑ were designed in such a way as to have biased algorithms.  And one of the results of this bias in the algorithm design is that it was automatically weighting persons as more likely to reoffend in virtue of their belonging to a particular demographic group. 

          So, for example, African‑Americans and Latinos who had been in prison were rated as more likely to return to prison ‑‑ rated with a higher likelihood than, for example, their white counterparts or members of other demographic groups within the United States.  And this, of course, is something that we don't want.  And the idea here is that if we increase the fervor of our efforts in embedding human rights into the design of these systems and into our conception of ethically aligned design, then hopefully that would ameliorate this problem, and ideally remedy it entirely. 
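
          To make the bias just described concrete, here is a minimal sketch of the kind of group‑wise audit that can expose it ‑‑ assuming Python, with entirely synthetic scores, outcomes, and group labels rather than data from the actual system discussed:

    from collections import defaultdict

    # Synthetic (risk_score, reoffended, group) records; illustrative only.
    records = [
        (0.8, False, "group_a"), (0.7, False, "group_a"), (0.4, True, "group_a"),
        (0.3, False, "group_b"), (0.2, False, "group_b"), (0.6, True, "group_b"),
    ]
    THRESHOLD = 0.5  # scores at or above this are flagged "high risk"

    # False positive rate per group: people flagged as high risk
    # who in fact did not reoffend.
    flagged_innocent = defaultdict(int)
    innocent = defaultdict(int)
    for score, reoffended, group in records:
        if not reoffended:
            innocent[group] += 1
            if score >= THRESHOLD:
                flagged_innocent[group] += 1

    for group in sorted(innocent):
        rate = flagged_innocent[group] / innocent[group]
        print(f"{group}: false positive rate {rate:.2f}")
    # A large gap between groups signals the disparate weighting
    # described above.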

          And the last principle: all relevant stakeholders must be held accountable.  That is to say, they all must take ethical responsibility.  This principle is grounded in the human rights‑compliant conception of justice.  That is to say, no one gets off the hook.  There can be no passing of the buck of responsibility.  All must be held accountable.  And the idea, of course, is that the Data‑based Systems Agency would work to strengthen the legal framework in order to make it harder for the various members and stakeholders at the different levels of the design, development, and deployment of such systems in the public space to get off the hook of taking responsibility for the unethical outcomes of such systems. 

          One way to think about this agency is that it would function analogously to already existing agencies: the International Atomic Energy Agency, as well as a more local agency, the American Federal Aviation Administration.  And it would function analogously to the Montreal Protocol of 1987, which phased out the use of CFCs in order to protect the ozone layer.  As such, it would share in the common features of these organizations.  That is to say, it would have concrete regulations and concrete mechanisms of enforcement, as well as international support and cooperation. 

          I would like to thank everyone for your patience in listening to this presentation.  I would now like to move to the discussion portion and open the floor for feedback from the audience.  Dr. Tauchnitz, would you like to proceed first? 

>> EVELYNE TAUCHNITZ: Yes, of course.  Thank you.  So maybe the slide with the discussion once again.  Yes.  Thank you. 

>> AARON BUTLER: Yep. 

>> EVELYNE TAUCHNITZ: So something we have been talking about, also in preparation for this presentation, is really whether the focus on human rights as a minimum standard is enough, or if we would need some additional principles ‑‑ for example, principles based on justice and responsibility from an ethical point of view.  So to say: is the focus on human rights already covering our need for ethical principles, or do we need something else in addition?  I would like to take the second question directly, asked the other way around: if digital technologies were designed, developed, and used in a way as to fully respect and promote human rights, would there still be any need for additional guidelines?

Because we often see, especially in the corporate world, that private companies, or organizations at a national or regional level, have ethical guidelines apart from human rights.  Asking it that way: what is the additional value of that?  Or are we not just duplicating efforts there?  Because, as my colleague has probably mentioned to you while I was not yet here, human rights offer the significant advantage of already being legally binding, of offering a legal framework that we can build on and that has already been recognized on an international level by virtually all states.  So how do human rights and ethics connect there?  And Aaron, do you want to read out the other two questions, or should we first discuss this couple and then move on?  What do you suggest? 

>> AARON BUTLER: I suggest that we go ahead and discuss the two that you mentioned, and then we can move on. 

>> EVELYNE TAUCHNITZ: Okay.  Great.  So please feel free, the participants online, to leave your comments or raise your hand or people here in the room as well. 

>> AARON BUTLER: Yes.  Allan Ochola. 

>> Yes.  Thank you for the insightful discussion.  I don't know if I can put my video on.  Okay.  Thank you for this insightful discussion.  One thing that I want to ask is with regard to your previous point about the legal frameworks for enforcing AI and the biases that you mentioned.  It's more or less a discussion, not really a question per se, but some of this AI, and some of these technologies, I think, transcend geographies and nations.  And you also mentioned the analogous systems on which this model draws.  And I'm imagining a system like in Africa ‑‑ sometimes there are governments where the definition of human rights keeps on changing ‑‑ so imagining how that system works, its implementation might actually be interpreted differently, and also, who defines these human rights?  I think those are questions which also need to be discussed.  And also the model.  For instance, you were talking about the difference between, say, firearms or atomic weapons and what is algorithm‑based.  Who gets to be responsible?  I know you mentioned everyone being accountable, but again, who bears the greatest responsibility?  Is it the person who pressed the send button, or who manufactured the weapon, or who designed the algorithms?  I think those are questions which I want to throw out, and you can also comment and chime in on some of the discussion.  Nonetheless, a very good discussion at the IGF.  Thank you. 

>> AARON BUTLER: Thank you very much for your comments and questions.  Dr. Tauchnitz, would you like to begin? 

>> EVELYNE TAUCHNITZ: I think an important point is, yeah, the question about implementation.  Because one thing is that we have human rights frameworks that are in place, recognized at least formally within the United Nations.  But then there are huge questions, of course: how do we get them implemented?  And what does responsible technology mean?  Like, is it the people behind it who are designing it, as was just mentioned, or is it the ones who are actually giving the orders, or deciding when and how it is to be used?  And I think it's really about getting things from paper to the real world, so to say.  And I think there it's also important to talk about the right to remedy.  Like, if harm does happen, or if somebody is discriminated against, is there any access to remedy?  What happens in that case, and is there any enforcement possibility?  Which is also going to be another discussion point, which we mentioned there. 

>> AARON BUTLER: Yes.  Yes, indeed.  One of the things to also keep in mind ‑‑ and this is actually a question, and hopefully this will further Mr. Ochola's point.  So we have existing bodies, right?  Like the GDPR, with officers at various companies whose job is basically to enforce those standards, and various other existing agencies.  What benefit, then, would the global agency or institution that Professor Dr. Kirchschlaeger is recommending add to that?  Would it help to shore up existing agencies?  Would it replace them?  This is something that needs to be considered very carefully, as part of the inquiry into the details of the enforcement mechanisms needed to ensure this.  I mean, one benefit, for example, that such a global agency could have is that it could help coordinate the efforts at regional levels, right?  But is it necessary, or is it superfluous?  These are some things that need to be considered.  I just want to check the chat for a moment.  I think someone has a question.  Before we move on to this question, I just want to give Mr. Ochola ‑‑ I hope I'm pronouncing your name correctly, and my apologies if I'm not ‑‑ 

>> Yes, yes, yes. 

>> AARON BUTLER: I wanted to give you a chance also to respond.

>> Yeah.  I think it's a very interesting dialogue.  I would say, I think, just having more voices in this conversation ‑‑ the dialogue, I think, needs to continue, so that when enforcing the human rights, all of our voices are included as part of it.  And my thinking is that probably even more governments need to be brought on board in some of this enforceability, so that it doesn't become only a private sector‑driven initiative.  Some of the policies can also have some impact.  Because I'm talking now from an African perspective and also a Kenyan perspective: I think we have implemented data protection, but it's a young law ‑‑ I think it was launched last year, so it's still a young one ‑‑ with a Data Protection Agency just to ensure this data protection.  But there are also those issues of strengthening this institution, knowing the areas where the data protection needs to come in.  It needs to be enforced. 

          I know there's the GDPR, but the level of its applicability and also its enforceability in Africa specifically ‑‑ I don't think it's that much.  So even if there are GDPR officers, they may not be able to enforce it.  I think we're also looking at the inclusive way, because ‑‑ I'll go to my issue.  My issue is that these technologies don't understand borders; it's not something you can lock to, like, the U.S. or maybe Europe alone.  It should be enforceable anywhere.  Issues about bias, and issues about just bringing more voices on board ‑‑ that is probably what I would really say at this time. 

>> AARON BUTLER: Would you say also that it's a problem of the ubiquity of the different mechanisms, right?  An enforcement mechanism needs to be fully embedded at multiple levels and have a sense of ubiquity for it to really have the teeth it needs in order to function as an enforcement mechanism. 

>> Yes.  Yes.  Even from a policy perspective ‑‑ I think it's this year that the African Union began forming that task force on AI, assessing the harms of these AI systems.  So the discussion is still pretty much at the infancy level, especially from the African perspective.  But people are trying to at least build awareness, I think.  One thing I like about the forum, at least, is that it brings these things into the open so that policymakers can actually really know exactly what to implement.  So I think the discussion is okay.  I mean, at least it's exposing what people really need to point towards, and it also gives direction on where we really need to focus and on bringing in more voices.  Yes. 

>> AARON BUTLER: Thank you. 

>> Yes. 

>> AARON BUTLER: Thank you.  I want to allow the person who ‑‑ Marta Grabowska.  I'll go ahead and read the question aloud.  This is in the chat: What would the relationship between this institute and ETSI be? 

>> Yes.  This is my question.  Because ETSI already has some standards on ethics and artificial intelligence.  So what would be the relationship between these standards and what the future institute would be able to do? 

>> AARON BUTLER: Dr. Tauchnitz, are you still with us? 

>> Pardon? 

>> EVELYNE TAUCHNITZ: Yeah, sure.  I mean, that's, in fact, exactly the third discussion point that we listed there.  Like, if we would have such an International Data‑based Systems Agency, as the author of the book, Professor Kirchschlaeger, suggests or proposes, how should it be organized?  And the question of its additional value, of course, is significant.  And maybe it's a good moment to lead over to this discussion point, and also to discuss in more depth, as we are now at the IGF, what the role of the IGF would be.  Because, to distinguish there, as we also listed, there are different entry points, I think.  One thing is that you can enter more from the technology side, and then set up, like, technical guidelines and standards.  Or you can rather do it from the protection side, which is human rights.  Like, for instance, the Office of the High Commissioner for Human Rights in Geneva is working a lot on translating human rights to the digital space, so that is more the protection approach.  It focuses on the human being and then asks what boundaries technology has to respect.  Do we rather want to go from the technology side, or do we rather go from the protection side?  Because that has implications, then, for what such an International Data‑based Systems Agency could link up with or build on, or whether it should rather be a new, separate agency.  I think it's a good moment to discuss that. 

>> AARON BUTLER: Yeah.  For my part ‑‑ and this is just a follow‑up that dovetails with Dr. Tauchnitz's response.  Regarding the two vectors, namely the technology side and the protection side, one advantage of such an agency, as has been suggested here, is that both vectors of approach would seem to be advantageous.  And one of the things that such an agency can do is work with existing bodies in order to help straddle both approaches, to pursue them simultaneously.  So one purpose that it can serve is not so much to replace existing bodies but to act as a means of coordinating collective intentionality towards improving the situation, by concretely following both vectors, or work streams, on the technology side and the protection side.  That could be, for example, one possible relationship between the DSA and the institutions in question. 

          Mrs. Grabowska, would you like to follow up? 

>> Yes.  Thank you very much for your (audio fading in and out) explanation.  This was, in fact, what I had in mind.  In some of the standardization bodies there are already standards where the ethical issues are included.  You can find such standards where ethical issues, ethical conditions, are already explicitly included.  So the question is whether there is any cooperation, or will there be any cooperation ‑‑ I mean, how such a new institute is going to undertake some sort of conversation or exchange with these standards organizations, which are already quite advanced in producing standards that include ethical issues. 

>> AARON BUTLER: Yes.  Absolutely.  So from my point of view ‑‑ and I'm in many ways speaking for Dr. Kirchschlaeger ‑‑ one of the main contributions that the DSA could make is this.  Imagine all of the different efforts, the different standards that already exist, woven together and working in dialogue.  What we want, then, is, of course, increased cooperation and a strengthening of the collective action.  A tighter weave, if you will, so we can get a covering of the problem space. 

          One of the benefits that I think the DSA could offer is to help coordinate those efforts, right?  A lot of the examples that you mentioned are happening at an international level, but they're also happening locally and regionally all around the world.  One of the things that the DSA could do is not so much replace that ‑‑ of course, those things can be improved, but there's no need to replace them.  There's no need to reinvent the wheel. 

          But one of the things that the DSA can do is contribute to coordinating those efforts so we can have a truly global covering of the problem and a global improvement ‑‑ an optimization, if you will. 

>> Yes, I understand, of course.  But it is a huge task, I think. 

>> AARON BUTLER: Yes. 

>> To create such a sort of institution, because many, many bodies have already worked out some sort of ethical rules, which are included in many, many official documents that are already implemented. 

>> AARON BUTLER: Yes. 

>> In other areas.  So anyway, thank you very much for this clarification. 

>> AARON BUTLER: You're welcome.  You're welcome.  And I would also say that Professor Dr. Kirchschlaeger does not pretend that the situation would be easy.  Of course, it would be difficult.  But once again, focusing on analogous organizations like the International Atomic Energy Agency: of course, there are regional mechanisms to help in this regard, but the agency has the effect of coordinating these efforts.  And, of course, that's not easy.  It's not easy for that agency, and it would certainly not be easy for the DSA as it has been recommended.  Thank you very much for your comment. 

>> Thank you.  Thank you. 

>> AARON BUTLER: There's another ‑‑ oh, yes, please.  Dr. Tauchnitz. 

>> EVELYNE TAUCHNITZ: Maybe if I could add to that.  I think something which is really key is to make all these existing efforts and standards legally binding, because they are just at the recommendation level.  Or they're, like, more of a nice‑to‑have thing, or what is often referred to as soft law.  But it's really key to also make these standards and already existing norms legally binding.  Because if it's just a choice ‑‑ if, for example, tech companies or also governments can pick and choose which standards they like and which ones they don't like so much ‑‑ then I don't think we really are where we want to be.  Human rights are not the most comfortable thing to comply with.  I mean, they can be uncomfortable both for governments and for companies.  So we have to make sure that they're also respected and promoted when it might not be that easy, so to say.  So it shouldn't be a pick and choose.  That's really a problem I often see with ethical standards and guidelines: they're all really nice, but there's rather a kind of pick‑and‑choose attitude, especially from private companies. 

          And I think it does make sense to try to make them legally binding, and also to build on already existing frameworks ‑‑ like, for example, the U.N. Guiding Principles on Business and Human Rights, which specifically address what human rights mean for the business world.  Unfortunately, they're not legally binding.  They're just recommendations.  So I think one task of such an agency may be to really go into the process of how we can make all these norms and standards legally binding, and to guide this process. 

>> If I may still ‑‑ if it would be under the United Nations umbrella, of course, you would not be able to issue any binding (?).  But, for example, if it would be under, let's say, the Council of Europe, then at least in some areas it would be binding.  So the additional problem is how to resolve the problem in terms of the whole world.  From the legal point of view, it will never be possible.  There would probably be some sort of recommendation, which in some areas would be discussed, maybe, or even taken into consideration as a legal act ‑‑ as in, for example, the case of the European institutions.  In other cases, I mean, it would be very difficult to obtain something that would be a legally binding law, it seems to me. 

>> EVELYNE TAUCHNITZ: If I can maybe add to that.  That's really an advantage, I think, if we build on existing human rights frameworks, because at least the two covenants of 1966 are legally binding.  I mean, that's treaty law, binding on all of the signatory states of these two covenants of 1966, on civil and political rights and on economic, social, and cultural rights.  The problem is not so much that human rights are not legally binding.  It's rather how to apply them online, in the sense of how do we interpret that, how do we make sure that they're respected, and what happens if they're not respected?  I think that is the big question here. 

>> Mm‑hmm.  Yes.  Thank you.  Mm‑hmm.  Thank you very much. 

>> AARON BUTLER: Thank you.  Thank you very much for your comments and questions.  We have 7 minutes remaining. 

>> May I follow up on something? 

>> AARON BUTLER: Yes, please. 

>> Yes.  What I just wanted to add ‑‑ I agree with what everyone is mentioning.  But also, when coming up with such a framework, it means the DSA is more or less providing, like, a direction in terms of what the gold standard should be for human rights protection.  That is what I understand from this discussion.  But a question that someone may raise in the future is: which model, which best practices do we have that we can adopt or follow, from which we have drawn the lessons to come up with this human rights framework?  I think that will be something people will definitely ask you, even if not in this forum; I'm sure someone somehow will want to know exactly, if you're telling us to follow this direction, where we have seen it being implemented, or something like that.  Also, in anticipation of this seeing the light of day, one thing that is quite certain is that there will have to be collaborations.  It can't work immediately.  The organizations are so many.  I know the International Standards Association ‑‑ this body that does standards, regional standards ‑‑ I think they are also involved in AI algorithmic standards.  I think they're pretty much everywhere globally, and I think that's more credible when it comes to developing standards.  I'm not sure how they are involved in terms of human rights, but I know in terms of standards ‑‑ just developing standards for, like, even the machines and everything ‑‑ maybe you might have heard of them.  I think they might be relevant, too.  But also, even just offering capacity building, so that it doesn't just come from only one point.  Also issues around capacity building ‑‑ how do you do capacity building using the best‑model approach, one that people and organizations can actually come to online and then go back and implement?  I think that maybe might not be so hard, and it would maybe give direction.  So those are some of my inputs when it comes to issues of the DSA. 

>> AARON BUTLER: Yes, yes.  We have 5 minutes remaining.  One thing I would ask in response to your comments, Mr. Ochola, is this: in terms of developing a best practice model that, for example, the DSA could follow in conjunction with other, already existing organizations and standards, what, in your opinion, would be the next concrete step in that regard ‑‑ for example, one that would help the situation on the African continent, let's say? 

>> From the African continent, let me frame it this way: most of these things are, I would say, still at the infancy level.  There is no single model; most countries are probably just developing them from their own experience.  For instance, I know in Egypt they're doing quite some work, and in Kenya.  So it has to come at a conference level, where they'll share their best practices.  That is where we have seen people come and share their models.  And also publishing those models, because people rely on that; I know publishing works a lot when it comes to some of those best practice forums. 

          And also, when people see it's more multistakeholder and evolving, I think that is also an area people look at.  And also the open ‑‑ for example, Kenya is actually involved in areas around open government partnerships, where people come and share their platforms, and these are some of the best practice forums.  They're not legally binding, but I know they are normally best practice forums where people can actually just come and go with takeaways.  Those are some of the ways ‑‑ those are some of many things I can think of right now, but I'm sure there could be others; maybe if I sit down, I can probably think of them in a more complete way than right now, yeah.

>> AARON BUTLER: Yes.  Thank you.  Thank you, very much, though, for your comments.  I would also like to briefly ‑‑ we have just 3 minutes remaining.  And I know towards the end, I saw online that the remote hub in Bangladesh has joined us.  Welcome.  Thank you.  Thank you for your participation.  I just wanted to acknowledge your presence there. 

          One thing in wrapping up this session ‑‑ we have 2 minutes remaining ‑‑ in terms of the legally binding aspects of the mechanisms of the DSA and the perspective of developing human rights‑based data‑based systems: what are some, I guess, parting recommendations for strengthening the legal bindingness of such an agency like the DSA?  We need to really get to a standpoint in which, from an ethical perspective, we have more things developed and on the books, if you will, that are not just nice to have but that are legally binding. 

          And in terms of an approach to this, what would be a concrete step that could work globally?  I just wanted to put that out there as an additional question to the audience.  That is to say in addition to the ones on the slides.  And anyone can respond. 

>> I think definitely, if you're going to mention something on a legally binding basis, it has to come from the common agencies, or maybe the data protection agencies, that are involved.  That is probably my one way of thinking.  Because it can't be binding if you don't involve the stakeholders.  So it has to be a multistakeholder approach, even if it has to be at a global policy level.  I think localizing is a bit important once you develop a standard. 

          I think also localizing it is very important, because there could be global standards, but my view is that it also has to be localized, as much as it is a global standard.  Because, like someone mentioned, what human rights mean in Kenya might not be the same as in Uganda, and might not be the same as in Tanzania.  So I think that is something that also deserves to be taken into account.  There could be a best practice forum at a global level, but the real work is also how you localize some of these things at a regional level.  Because one way of doing that ‑‑ there are bodies in Africa, such as (?) the Eastern African Confederation.  Those are bodies which I think actually have initiatives, and I think these are discussions that they would really welcome in some of their forums, in terms of how to move forward. 

          If you approach some of these bodies, they're very open.  You know, it's very exciting to have these discussions develop and to listen to them, to listen to such open discussions.  It may take time, but at least the progress is there.  Yeah.

>> AARON BUTLER: Yes.  Yes, of course.  Well, we are out of time.  It is now 1:01.  We thank you all very much for your participation and for your patience and your presence here at our session.  Dr. Tauchnitz, would you like to say any parting words? 

>> EVELYNE TAUCHNITZ: Yes, thank you to everybody who participated here, both in Katowice and online.  I think this is an ongoing discussion, and whoever would like to continue this discussion, please get in contact with us.  Also Professor Kirchschlaeger, who unfortunately could not be here with us, would be very open to addressing any of your questions.  And you can find the contact details on the website of the Institute of Social Ethics, or maybe we can also share them online again.  So, yes, please.  Let's continue the discussion. 

>> AARON BUTLER: Yes.  Thank you.  Thank you.  Let me go ahead and share in the chat a link to the website of the organization.  One moment, please.  I'm loading it.  There we are.  Whoops.  There we are.  All of the contact information for the Institute is at that website.  We thank you all very much for your participation, for your time, and for your comments ‑‑ they're very helpful ‑‑ and for the critical discussion.  We wish you all a wonderful rest of the conference in Katowice at this year's Internet Governance Forum.  Thank you very much. 

>> Thank you.  We'll be in touch.  We'll be in touch.  Thank you.  Bye. 

>> EVELYNE TAUCHNITZ: Thank you.  Good‑bye.  Thank you. 

>> Bye.