
IGF 2020 - Day 5 - OF49 Upholding Rights in the State-Business Nexus: C19 and beyond

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



   >> MARK HODGE:  Okay, let's kick off then.  Hello and welcome, everybody, and thank you for joining us today at this IGF panel.  My name is Mark Hodge, a Senior Advisor to the UN Human Rights B-Tech Project, and we're delighted to have you with us today.  For those not familiar with the UN B-Tech Project, it is the OHCHR project focused on advancing the implementation of the UN Guiding Principles on Business and Human Rights in the context of technologies; put another way, it looks at how to leverage experience from the corporate accountability and corporate responsibility field in discussions about the responsible use of technology.  It is by far not the only piece of work that OHCHR leads with regard to technology; there are lots of other projects and efforts, not least by the special procedures and the office itself.  Our particular focus at B-Tech is to provide clarity, convene diverse stakeholders to explore challenges, and ultimately produce guidance and recommendations on how to advance stronger application of the Guiding Principles.  There are four focus areas in the project, and I will not go into them in depth; some of my colleagues will post links to the B-Tech resources for those who want to look further.  The first focus area looks at how to address risks to human rights that are inherent in company business models.  The second looks at human rights due diligence and end use.  The third unpacks access to remedy: what it means, and the complexity of ways that technologies can impact rightsholders.  And the fourth tries to unpack what it really means for states to apply a smart mix of measures in how they engage, and to make sure they uphold their duties to protect human rights in the digital era.

There is also a cross-cutting area of work focusing on investors, so please keep an eye on our project pages as we begin to produce more outcomes and share insights from various experts there.

So today's session, as stated in the title, dives into an aspect of that smart mix.  The UN Guiding Principles recognize something called the State-Business Nexus: states, when they contract with, procure from, license technologies from, or partner with technology companies, need to take up their own role in guiding and incentivizing those companies to operate respectfully, but also need to make sure that they meet their own human rights obligations as states as they do so.  This plays into examples we hear about: the digital welfare state, the use of technologies in the criminal justice system, in policing, and of course in intelligence and defense as well.  The global pandemic has put that into sharp focus, as states look to technologies to help build their strategies and provide robust, effective responses.

So, to the question today, we've got four excellent speakers to help us unpack how we do this.  What are the dynamics, what do we need to think about as we consider this State-Business Nexus, and what are the realities faced by different stakeholders?  We've got Gary Davis, Global Director of Privacy and Law Enforcement at Apple; and Gary, maybe you can give everyone a wave in case they can't see your name.  Stephanie Hankey, Executive Director and Co-Founder of Tactical Tech; again, Stephanie, if you could wave.  Phil Dawson of Element AI; and John, based in Australia, and we thank you, John, hugely for being with us at a very late hour where you are.

What we're going to do is have a quick round of initial inputs from each of the panelists and then open up a discussion among them.  We worked very hard to bring in the questions that you raised; there is a Q&A function as well as a chat function, so please use those, and we'll do our best to find time at the end to address some of your questions.  Please share questions in any case, because from the B-Tech project perspective they are questions we can pick up, and they give us a signal of where we need to take our work more widely.

So, Gary, I would like to start with you if I can.  We know that Apple and Google have been central in providing technologies to help states and others find ways forward through your exposure notification work, and have really committed to taking a privacy-oriented stance on this.  We have all read a lot about it, so it's a really good opportunity to hear from the inside, Gary, about your experience of this journey over the past months.

   >> GARY DAVIS:  Certainly, and thank you, Mark, and thank you very much for inviting Apple to participate on this distinguished panel as part of this year's Internet Governance Forum.  In many respects, I think we can now say that the COVID-19 pandemic has reaffirmed that the fundamental principles of data protection and privacy remain alive and well.  Perhaps there was a moment at the start, when governments were rightly moving fast to protect their citizens, when it seemed it might not turn out that way, but that moment passed quickly.  I would like to take you back to what seems like a lifetime ago now, to a Saturday afternoon in March, when I joined Apple colleagues to talk to the Irish public health authority about support they were seeking from us on the use of Bluetooth technology to establish close contacts between persons subsequently diagnosed with COVID-19 and persons they may have been in contact with.

Over the coming days and weeks, those contacts intensified with public health authorities around the world, and culminated in Apple and Google announcing a truly unprecedented initiative on contact tracing.  In the weeks leading up to that announcement on the 10th of April, we were in continuous contact with those public health authorities who were seeking our help.  So our response to work together, while unprecedented, was appropriate to the emerging situation that we all found ourselves in.

What was clear to us very quickly, however, was that some of the contact tracing ideas coming forward posed a risk to privacy and human rights, as they involved extensive data collection on citizens.  More specifically, some of the proposals would actually have undermined their very purpose: broad adoption.  Our view was that the only way to achieve broad adoption was to build and maintain user trust.  Here in Europe we are fortunate, like many places around the world, to have a strong legislative base that protects privacy.  But even here, the legislative base cannot maintain trust on its own, so we recognized immediately that there must be fundamental principles we would adopt and be very public and open about.  That is why, over the weeks between the 10th of April and the launch of what became known as the Exposure Notification API, we made a number of interim public updates to provide more information to public health authorities and the general public.

These updates were complemented by extensive engagement with the media, privacy regulators, privacy advocates, and of course governments.  Transparency was and is a must in this space.  Transparency, of course, while a minimum requirement, is not sufficient on its own when public health authorities want and need broad adoption of their contact tracing initiatives.

The second important principle is data minimization.  Together with other initiatives emerging at the time, such as DP-3T, we believed effective contact tracing could be achieved using a decentralized system which kept control fully in the hands of the individual.  So we set down some basic privacy protection principles: users must explicitly choose to turn on exposure notifications, so they have to take the action to turn it on on their device; the exposure notification system does not share location data from a user's device with the public health authority, Apple, or Google; the system does not share the user's identity with other users, Google, or Apple; matching for exposure notifications is done only on the user's device; and users decide whether they want to report a positive diagnosis.
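Those principles describe a decentralized matching model: devices broadcast short-lived identifiers derived from keys that never leave the phone, and any exposure matching happens only on the user's own device. A minimal sketch of that idea, with simplified stand-in cryptography (plain SHA-256 and made-up key sizes here, rather than the HKDF- and AES-based derivation the real Exposure Notification system uses), might look like this:

```python
import hashlib
import secrets

def rolling_ids(daily_key: bytes, intervals: int = 144):
    """Derive short-lived broadcast identifiers from a daily key
    (illustrative only: the real protocol uses HKDF/AES, not SHA-256)."""
    return [hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
            for i in range(intervals)]

class Device:
    def __init__(self):
        self.daily_key = secrets.token_bytes(16)  # never leaves the device
        self.heard = set()  # identifiers observed over Bluetooth

    def broadcast_ids(self):
        return rolling_ids(self.daily_key)

    def record(self, identifier: bytes):
        self.heard.add(identifier)

    def check_exposure(self, diagnosis_keys):
        """On-device matching: re-derive identifiers from published
        diagnosis keys and intersect them with what this device heard."""
        return any(rid in self.heard
                   for key in diagnosis_keys
                   for rid in rolling_ids(key))

alice, bob = Device(), Device()
for rid in alice.broadcast_ids()[:3]:  # Bob's phone hears a few of Alice's IDs
    bob.record(rid)

# Alice reports a positive diagnosis by publishing only her daily key.
assert bob.check_exposure([alice.daily_key])
assert not bob.check_exposure([Device().daily_key])
```

Only the daily keys of users who choose to report a diagnosis are ever published; location and identity are never shared, which is the point of the principles above.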

Importantly, the system is used only for exposure notifications by official public health authorities and isn't monetized.  Also importantly, Google and Apple have made a commitment to disable the exposure notification system on a regional basis when it is no longer needed.

We remain very much engaged in this space and continue to iterate on exposure notifications in response to requests from public health authorities.  We have held over 100 technical briefings with public health authorities, government officials, and others, and we remain in constant contact with them.  Just in the last few weeks, for instance, we announced Exposure Notifications Express, a new way for public health authorities to deploy exposure notifications quickly and easily, avoiding the need for them to build and maintain an app.  Public health authorities using Exposure Notifications Express remain fully in control of how notifications will be triggered, what next steps to advise their citizens on, and how to guide exposed users toward further contact tracing.

Importantly, however, our baseline privacy and human rights principles, which I have just outlined, have not and will not change.  I also want to briefly mention something of which I am very proud outside of the contact tracing space, and that is the Apple Maps Mobility Trends Reports.  At the start of the pandemic, as governments responded with lockdowns, we started receiving requests from some of those governments for data that would help them understand how their populations had responded to the lockdown orders.  We wanted to help, as has remained our approach throughout the pandemic, but clearly we needed to do so in a way that aligned with our position on privacy as a fundamental human right.

Fortunately, Apple Maps was already designed from the ground up to reflect those principles, as use of Maps is tied to random, rotating identifiers on the device and not to a user's identity.  However, even with those protections in place, we did not consider it right to simply provide raw data in response to such requests.  Instead, we worked to a very short deadline to produce trend data indexed against a baseline of 100 on the 13th of January of this year, so governments could visualize movement going up and down but not receive the individual data.

We also considered the issue of fairness and availability of this data.  Why should it only be available to those governments that thought to ask us and actually had the capacity to integrate it?  We therefore took the proactive step of publishing the data on a worldwide basis, with useful visualizations, once a minimum number of journeys had been undertaken in a particular region, so as to ensure that there was no possibility of linking the data back to individual journeys.
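The two safeguards described here, indexing aggregate counts to a baseline of 100 and withholding low-volume regions entirely, can be sketched roughly as follows. The function name and the minimum-journey threshold are illustrative assumptions, not Apple's actual pipeline or cutoff value:

```python
def mobility_trend(daily_counts, baseline_count, min_journeys=1000):
    """Convert raw daily journey counts into baseline-indexed trend values.

    Days with fewer than `min_journeys` journeys are suppressed (None) so
    published figures can't be linked back to individual journeys; all
    other days are expressed relative to the baseline day (= 100).
    """
    return [None if c < min_journeys else round(100 * c / baseline_count, 1)
            for c in daily_counts]

# A baseline day of 50,000 journeys; a lockdown day at 20,000 shows as
# 40.0, and a day with only 300 journeys is withheld rather than published.
print(mobility_trend([50_000, 20_000, 300], baseline_count=50_000))
# [100.0, 40.0, None]
```

The design choice is that only relative movement is released: a government sees the trend go up or down against the January baseline, but never the underlying raw counts for low-volume regions.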

We continue to publish that data today and have committed to doing so for as long as the pandemic remains a public health concern.  And with that, thank you very much, Mark; I hand it back to you.

   >> MARK HODGE:  Thanks, Gary.  We will give our panelists time to ask each other questions and come back in a moment, but I wanted first to invite Stephanie to layer in other perspectives on the question of the State-Business Nexus.  From your perspective, how should we even be conceptualizing this, and where should we be focusing?  I invite you to help us widen the lens around this question, relating not just to COVID but more widely.

   >> STEPHANIE HANKEY:  Thanks.  Yeah.  I can talk about technologies in the context of crisis and disaster more widely, but there are so many technologies just within the context of the pandemic, so it's great to hear from Gary about two specific ones that Apple was involved in.  What we see is that technology is completely central to the response: it's a big problem that needs a quick response, and so technology-thinking, or data-centric thinking, is at the forefront of responses all over the world.  We tend to think of track-and-trace systems first and foremost, but what we're seeing is not only the micro level in terms of how to help, but on a bigger level all kinds of innovations like the mobility trends that Gary just mentioned: technologies for modeling, predicting, and understanding the risks at a broader scale; technologies for monitoring and getting ongoing feedback, being able to do things like predict clusters or monitor outbreaks; but also technologies for mitigating, for enforcing lockdowns and controlling them; and then lots and lots of seeming innovations coming through, like wearable trackers to figure out whether people have left a space or are staying at home, looking at behaviors on a much bigger level.

And I think we see all kinds of technologies emerging which feel new but mostly have been around for a long time or have been used for other things.  I'm reminded of Naomi Klein's work on the Shock Doctrine, the idea that in a crisis we reach for the ideas lying around, and I think that's what we're seeing here.  The media reported these as new technologies for the pandemic, but actually what we see is an enormous amount of technology that's been around for a long time being used: data from mobile phones, data from credit cards, data from CCTV cameras, even home sensors; but also technologies that have become more visibly used, or at least very often suggested for use, not necessarily always successfully, and I'll come to that in a second.

What we definitely saw, especially at the beginning, was an unprecedented synthesis of private and public data.  If you think about what happened in Singapore, for example, you ended up with location data from a mobile phone combined with credit card data, combined with travel data, combined with health records, and sometimes with CCTV cameras as well.  That's a kind of data synthesis we haven't seen before, definitely not at the public level, and it raises a lot of questions about consent, and about how these techniques, which were previously used mostly for intelligence and law enforcement, where that kind of data synthesis is common, are now applied.  What happens when you overnight start applying that to citizens in the public space, and what kinds of questions does that raise?  What is interesting is that it has also been largely transparent.  There are definitely some governments and instances where we haven't known about the technology, and the technology itself might not be transparent, but the fact that they're using these technologies has been relatively transparent, even in some countries where intelligence services offered overnight the technologies that they use, mobile phone tracking in Israel, for example, in the spirit of "we've got this anyway and it works, why don't we use it."  So I was surprised, even as everybody has been looking at data privacy, security, and surveillance for really 20 years, that we didn't have more of a discussion about that overnight switch in the synthesis and use of data that we saw.
This is very different from what Gary was talking about in the context of Apple, of course, and I just want to flag why I think it's different: there is so much emphasis on these very large platforms and technology companies and the efforts they've made, but what we're actually seeing is entire industries that we are perhaps less aware of in the public eye, and less aware of the technologies they make that are now being used all over.

I'm thinking of technologies being developed by researchers, academics, and small-scale startups, but also an enormous amount of technology coming from surveillance companies and being used by governments.  And perhaps that is not so surprising: if you want to do disease surveillance, or help track individuals, the same technology that is used to track criminals or terrorists, and in some instances used to track human rights defenders and journalists, is extremely useful in this case for tracking coronavirus-positive patients.

And we've seen quite a few examples of that, some just suggested and some not necessarily proven, but certainly a lot of activity in that space.  For example, NSO Group in Israel, which makes mobile phone tracking and surveillance technology, is now applying it to the pandemic, as are companies that use facial recognition technologies, and it's endless, actually.  You could literally do a whole session on all the technologies that have been used in all the different contexts, and it's global as well.

But I think this has also raised questions about what data is already out there, and I'm surprised that we haven't had more of a public discussion about the role of behavioral data.  Despite all the different examples of data being used, at the heart of it, the most valuable type of data is really location data: information on where you are, how often, for what length of time, who you're next to, how close you are to other people, and even whether you're at home.  It has become much more valuable than I think anybody could have imagined, and this is technology that was already there in the marketing, targeting, and profiling industry but not really used in the context of health previously, and in that context, I think the idea of consent becomes really complicated.  I just want to mention one company that is particularly controversial right now, Palantir, as an example of a company that most people have probably not heard of before, but which has been working for a very long time: an example of the kinds of technology companies about which there is generally a slightly skeptical view, where maybe there is not that much transparency around how the company works, and where the types of organizations they normally work with give them a complicated profile.
Palantir, for example, does surveillance, predictive analysis, and modeling technology.  The way those technologies are being used in the context of the pandemic is, for example, to bring hundreds of datasets together and use them to help a government get an overview of the movement of people and of COVID cases in a country, because that is a level of data complexity governments are not used to handling; but they are also now talking about offering technologies which help governments do tracking and tracing as well.  The reason it's controversial is that they have, for example, worked with governments for years on border control and predictive policing technologies, and these are not consumer technology companies.

And in the context of the UK Government, for example, they've been very visible.  I would say governments need to show that they're doing something and need to have these big solutions, and I think they've gone after partnerships like Palantir despite the fact that the human rights protections or safeguards may not necessarily have been in place.

And maybe, as I know we probably don't have too much time, to say one more thing: there is a very big focus in the discussion, and there should be, on privacy and security, and the examples Gary gave of what Apple was able to do with the tool are really important.  I would go so far as to say that when it comes to the basics of data collection, privacy, and security right now, it is simply necessary for any responsible company to take the kinds of steps that Apple and Google have taken.  In the community of people working on this, not only corporations but also the alternative technology scene, there are some basic principles and guidelines that all companies should be following.  I don't think we should have to have that discussion anymore; that should be a given.  The reason Apple and Google were trusted in that context is not only their scale and ability to deploy things without confusing the public, but also that they're trusted not to make those kinds of mistakes and to protect people's data, because, as we know, a lot of the homegrown government solutions can lead to mistakes and lots of failures in the systems as well.

But there isn't enough emphasis on the other discussion, which is why they're able to do it.  They're able to do it because essentially they have a monopoly in the space, and because of the kinds of brands they've built up as the main players in the space.  But they also get to define the problem, and in that case we have to think about questions like: who gains the insights, who holds the insights, who gets to define the problem?  These are the kinds of questions we're not asking.  We're very focused on the privacy of the individual, and we're not always asking whether we are really mature enough to think about, in a public crisis like this, where the public good in that data lies, where the public interest in that data lies, and why we are working with private companies in that context.

So, I guess in summary: there is not enough emphasis on the whole industry and all the different small players.  And then, when we look at the big players, are we even asking the right questions?  I would say that privacy and security are a given and they have to operate like that; what is the next question we need to be asking in these kinds of contexts?

   >> MARK HODGE:  Thank you, Stephanie.  You mentioned a few times states working with whom they're already naturally working and repurposing some of those things, and I suppose even where there is transparency, there is a question, as you said, of whether states are asking those questions, and whether we are asking states to ask those questions in that context as well.

Phil, I would like to turn to you.  Element AI has really engaged around human rights issues, clearly in relation to things like Waterfront Toronto and Sidewalk Labs, and I wonder if you have a view on how the State-Business Nexus is playing out in the context of Smart Cities.  Could you also begin to help us think through what we do about this?  What are the solutions moving forward that put mechanisms into this business nexus to ask the questions Stephanie is talking about, and to make sure that we have some governance around those things?

   >> PHILIP DAWSON:  Thanks, Mark, and thanks Gary and Stephanie for bringing some really interesting things to the table so far; hopefully I can make some good links.

Yeah, the State-Business Nexus has been around since before COVID, obviously, and when it comes to digital technology, one of the areas where it has already been in focus is definitely Smart Cities.  This is interesting because, in some ways, municipalities and local governments are not the historical or traditional subjects of international human rights law, and yet they have been looking to it increasingly as a tool to help guide the deployment of digital innovations in their cities for a variety of purposes.

But it can get a little bit confusing, because development projects in cities typically proceed through private developers, and so it provides an instance in which the responsibility for the development can fall into a bit of a gray area.  In Toronto last year, our team at Element AI led a preliminary human rights impact assessment of Sidewalk Labs' proposed digital innovations for the City of Toronto, a project Sidewalk Labs had won through a request for proposals from Waterfront Toronto, a quasi-public entity.  Before the deal was even closed and the contract agreed, Waterfront Toronto committed to the preliminary human rights impact assessment.  What was interesting is, first, that this was in and of itself widely viewed as a very bold move by Waterfront Toronto at that stage, with the deal still being negotiated; and as probably many of you know, the project ultimately fell through.  But at this early stage, they commissioned a human rights impact assessment of the entire proposal, and in that call for proposals there were references to the UNGPs and, beyond the Universal Declaration of Human Rights, to another document as well: the Cities Coalition for Digital Rights.  There has already been a lot of work done by local governments to try to apply human rights principles in the local context, so this was another document we had to take into account as a lens through which to analyze the potential human rights risks of the project.

More than that, what Waterfront Toronto was doing through the assessment, the consultations, and the other initiatives that took place was to take the Cities Coalition for Digital Rights statement and further translate it into the specific context of the Toronto project being proposed.  They were developing what they called the draft digital principles, ultimately the Intelligent Community Guidelines, which translated these human rights principles even more specifically into contractual obligations that the developer and the developer's partners might also have had to comply with.

So, when you look at bridging the State-Business Nexus, this is one of the avenues that was being explored: taking human rights law and the work that cities have been doing in transcribing it into the digital space, and then building out guidelines that are more specific and could be embedded into procurement contracts.  We obviously didn't see those contracts because the project fell through, but it was an interesting, I think pretty avant-garde, approach in the Smart Cities context.

These contracts might have included penalties for intentional re-identification, for unlawful surveillance, or for anything in contravention of the digital principles the city had developed, and was developing, in consultation with the public.

So that was a fascinating experience to be a part of and to help shape, and even though the project ultimately did not continue, I think these are important precedents to pass along.

Something that did come up as a bit of a challenge, and I think Stephanie was touching on this in her comments in other areas, is the challenge of oversight.  In that particular context, how do you ensure that these contractual provisions are being respected, first with your primary, or master, developer, and then with the other developers that are subcontractors?  Through those contracts you could require these principles to be embedded, but how do you really know whether they're being respected, and how do you ensure that enforcement is real and effective?

One idea that came up in that context is that the city, rather than outsourcing so many of these innovations, should have taken a lead role in developing the digital infrastructure that some of these solutions could have been built on.  Then it would have had a better chance of owning the governance of the system being deployed, of incorporating citizen participation in the governance, and not just consultation pre-project, and of realizing some of the other principles in the Cities Coalition for Digital Rights and the guidelines that were to be embedded in contracts.

So the question of ownership of digital infrastructure and ownership of governance came up quite a lot.  Obviously we didn't get to see this play out, but had the project proceeded, respect for rights would largely have been governed by contracts between two private parties, and if you're a citizen with a complaint, where do you go?  Who do you file a complaint with?  So people talked about whether there should be a digital expansion of the local ombudsman's role to include digital rights complaints, and whether that institution would then have been empowered, and maybe this is one for John, to file a complaint with the human rights commission.  There was some institution building that might have needed to take place.  We didn't really get to see where this would go, but the conversation in Toronto is going on with potentially another project in the future, and other cities are having these conversations too.

Before I end, I want to tie this back to the conversation around COVID and the COVID Alert app, because at least in Canada, where I have more experience, the genesis of the national COVID Alert app yields some lessons for Smart Cities, I think.  Canada has been working with the Apple and Google exposure notification framework as the foundation, but it built out the rest of the app in house: the Canadian Government's digital service built the app and carried out consultations on both the direction of the app and its governance.  They have published a Terms of Reference, with oversight by an external council of public health experts, privacy experts, and technologists, to increase the transparency of the app and to report publicly on the public health outcomes the app is actually achieving, to ensure that it is actually effective and not part of the "tech for tech's sake" criticism we often hear of these projects.  It reports on privacy, and it reports on any complaints that may be lodged.
And all of this is online, and I think it's been an example of public sector leadership in developing civic tech.  Even as we wait for downloads to increase and for public uptake to improve (I think we're at 5 million downloads in Canada, out of 35-plus million people, so we have a ways to go, and provinces still to get on board), at least as a public/private sector project, I think there are some lessons that we can import into the Smart City context.  Obviously there are things that would be different, because a lot of this happened quickly in a time of crisis, but I think there are some valuable lessons here about the ability of the public sector and civil servants to themselves build technology that will hopefully be useful to citizens, and then to adopt the right governance mechanisms to ensure that there is transparency and that citizens have an account of the value it's actually bringing.  And maybe I'll just stop there.  Thank you.

   >> MARK HODGE:  Thanks, Phil.  I really appreciate that, and certainly from the perspective of who sets the rules, how they're enforced, and remedy for rightsholders, those governance questions are really central.  I'm conscious of what you've mapped out, and I want to turn to John.  Gary's story really described a process in which it ended up being very much Apple and Google saying, look, here is how we're going to set the principles, because, to Stephanie's point, there is a knowledge base within some of those companies of how to do this, and the leverage to say, no, this is how it will be done, we're not going to do it this way or that way.  And then we have Stephanie saying we need to be asking the tough questions, and Phil arguing for mechanisms by which states take more ownership over these processes.  I wonder, John, if you could speak from your perspective, given the work you've done at the Australian Human Rights Commission: how do we think about states' wherewithal and capacity to answer these questions?  I know you've thought about that a little in the context of your work on protecting human rights in the digital era.  So, John, your general reflections, and also comments on specific things you've been trying to move forward in Australia to help, encourage, guide, or require states to be robust in this?

   >> JOHN HOWELL:  Sure.  Thanks, Mark, and thanks to all the panelists.  Some really interesting reflections here, and I was particularly interested to hear the different perspectives on the development of contact tracing apps at the current time, given the bit of work we've done with the government here on the Australian rollout of that.  But that's slightly different from the question you've asked me to reflect on, Mark, which is the ways in which government can really help all players who are developing new technical solutions to respect, protect, and promote human rights.

So, one of the big proposals which the commission has made in recent times is that there is a need for some sort of specific leadership on particular subject matters.  We've been focusing very much on AI technologies and running a major project on protecting human rights in the context of developing artificial intelligence, and in that particular context, we've made a proposal for a body which might have some applicability to some of the technologies and solutions talked about today specifically, or there might be some analogue that could be used in other contexts or for other sorts of technical solutions.

I have to emphasize at the start that this is very much a tentative proposal at the moment.  We're in the middle of a two- or three-year project; we released a lengthy discussion paper about 12 months ago containing this proposal, we've been road testing it through another set of national consultations around Australia, and we'll have a final report out in the coming months.

So, at the moment, I'm expressing a tentative view of my organization, but it's safe to say the final recommendation will look substantially the same.

The concept of the AI Safety Commissioner arose in that broader context, so I might backtrack a bit to give some context for how our thinking about harm resulting from AI has evolved, and how that led us towards this proposed partial solution to help harness the benefits of this new technology while minimizing those harms.

As I say, we launched the project in about 2018.  We've released a couple of discussion papers since then and conducted two rounds of national consultations, consulting in particular on artificial intelligence as it's being used in all forms of decision-making, principally by government but also, to a degree, in the private sector.

And then we've also had a subsidiary, or secondary, part of that project on accessible technology, including AI and other technologies as well.

What we noticed is that at the time we started this project there was a groundswell of interest in artificial intelligence, the benefits it could bring and the harms it could bring, but there wasn't, I think, in Australia or globally at that time, nearly as much of a focus on human rights, so we were really interested in centering human rights in that discourse.

I have a slight jurisdictional bias in my approach, in that Australia perhaps has, in some ways, weaker human rights protections than some other jurisdictions, so that may have inflected the discourse here a little, but I think in general we weren't seeing a lot of discussion about human rights in terms of protecting against harms from artificial intelligence.

We quickly started seeing an increased focus on a different solution, which was a focus on ethics and creating ethical frameworks to help guide the development of AI.  That was a welcome development, but it really strengthened our resolve to focus in on the role that human rights could play in this particular space, both to benefit industry developing and deploying AI technologies and to protect the rights of members of the community.

What we noticed about the sorts of solutions being trialed and proposed was that they were nonbinding ethical principles; there wasn't much normative heft in the ethical frameworks being deployed, and there was a lack of precision in the content, which was framed at a high level of generality.  Technically minded participants we spoke with in consultations would often end up coming back to the question of, well, whose values are we really talking about here?  These values are all culturally situated, all relative; my values may be different from your values.  And what human rights do well is embody fundamental values: they represent agreed values, and the normative framework they sit within allows us to apply those values in particular contexts.

This is, I'm sure, preaching to the choir, but I think it's relevant to the way we've talked about an AI Safety Commissioner, because we're focused very much on a body dedicated to protecting and promoting human rights specifically.  So it's not me simply preaching human rights 101; it's really central to our thinking, and to how our thinking evolved towards the need for regulation and the principles that would underlie that regulation.

In turn, that helps to build trust in the community, because people know there are agreed standards which are being protected by a leadership body.

In the domestic political context, our thinking also evolved around a couple of big developments in Australia.  We were focusing very much on AI decision-making, and there were a couple of examples of use and proposed use of AI decision-making systems in Australia which garnered quite a bit of attention.  One is something that became known as robo-debt.  It was an automated process to recover supposed overpayments of welfare benefits.  It raised around half a million erroneous debts, issuing automatic decisions that benefits had to be repaid by citizens who, it turned out, simply hadn't been proved to have been overpaid.

That was a rules-based tool rather than a machine learning tool, but still within our concept of an AI tool.  It was deployed quickly, but it was erroneous, and the deployment, it turned out, was unlawful.  There simply hadn't been enough thinking in the particular department, the department responsible for delivering social security, about how such a tool should work: whether it's lawfully based, how it affects the rights of citizens, how transparent it is, how much of a right people have been given to challenge decisions affecting them in a timely way before they're negatively affected by the use of the technology.  So there was, from our perspective, a capacity issue there that perhaps led to the deployment of that particular tool.

A second big example, which I'm perhaps more aware of because it fell within earlier work we were doing at the commission: the Australian Government proposed some legislation to allow facial recognition technology to be used in Australia.  Some of those capabilities would have been available to private industry, some of them just to law enforcement and national security services in Australia, but it would have involved building a national database of facial images, with automatic enrollment for anyone who had an identity document in Australia.  It would have linked state, territory, and federal databases of identification information and allowed both one-to-one verification requests and identification requests.  And this legislation, which was drafted extremely briefly and very, very generally, would theoretically have authorized real-time, many-to-many identification of unknown people in almost any context.  Not every agency could have used both parts of the bill, but certain law enforcement and national security agencies could have.

And that was pushed through relatively quickly, or was going to be, but it's one of the first national security pieces of legislation I can recall that was essentially blocked at the committee stage in parliament because of concerns about the lack of safeguards built into it.  It was really the fact that there weren't any significant safeguards, and there didn't seem to be much consideration, made public at least, of how safeguards could have been included or why they weren't present in the legislation.  That legislation has now stagnated for, I think, over two years since it was first proposed, which has actually been a worse result for the government and for law enforcement than developing a more protective solution in the first place to enable legitimate uses of those technologies.

And so, again, perhaps that's a more AI-specific example, but it's an example of the way human rights perhaps weren't incorporated early enough within the agencies proposing that particular tool and the legislation that was going to enable it.  So, our proposal for --

   >> MARK HODGE:  Maybe just a minute to outline the proposals on the safety commission.

   >> JOHN HOWELL:  Yes.  I'm sorry about that.  So, very briefly, the idea of a Safety Commissioner is really a proposal for a leadership body.  It's framed partly because we were thinking about how AI could be regulated.  We realized that there are already a lot of regulators across industry, and that government is reluctant to impose new layers of regulation, but that regulators need support to apply their existing powers to the kinds of tools that use AI.

We also realized there was a need not just for the regulators, but for government and industry that are using, developing, procuring, and deploying these tools, to gain a better understanding of how they could do that in a rights-respecting way, what the impacts of those tools might be, and how they could ameliorate those potential impacts.  The idea of a Safety Commissioner is really a body that could build that expertise, disseminate it, build capacity, and provide ongoing support, particularly to regulators overseeing subject areas where those tools are particularly likely to be used.

   >> MARK HODGE:  Thanks, John.  It's something we hear commonly in the human rights space: the need for internal coherence across agencies around what it means to protect rights in their particular areas of responsibility.  So that's a good example, thank you.  And John, in a moment, as we wrap up, maybe you can tell us the punchline as well: where that proposal might be going and how the government is reacting to it, so we can see where we are with it.

I wanted to ask, and we do have a question from participants, so I'm going to raise that question in a moment, but firstly to turn to Stephanie, Phil, Gary (who I know has had a few Internet problems), and John: did anything spark for you that you want to react to or build on, or was there something in your initial opening that you wished to layer in?  Maybe a couple of minutes each, if you would like?

   >> STEPHANIE HANKEY:  Should I go again or just jump in?

   >> MARK HODGE:  Go for it.

   >> STEPHANIE HANKEY:  I just wanted to respond to what Philip was saying about states building their own technology, basically.  I think it's a really important question.  It's an ideal, and we might even want it, but it's such a difficult thing to realize, because the engineering power and money and expertise are not with governments.  What we've seen again and again in the pandemic, and I think his point about learning lessons from COVID is a really interesting one, is that there are so many failures.  If you look at the news, it's literally littered with examples of applications, for example contact tracing applications, that were funded or developed by governments, or by governments in collaboration with small companies or academic institutions, but failed: failed because nobody used them, or failed because they didn't work very well, or, in the case of the UK, failed to contact 15,000 people who had it.  There have also been security breaches and so on.  So it's a good idea and an important question, but I think the reality is that the engineering power and expertise sit elsewhere, and that poses a challenging question when you look through it.  And, coming back to what you asked at the beginning, we think about these technologies in the context of the pandemic, but actually they're the same technologies that will be used, and are used, in any other kind of disaster or crisis; the underlying technology is the same, even if the application is different.

And so the precedent that you set in this context, I think, is really important.  It's important to look at the questions we've got here, and in the context as well of your work, Mark, in terms of what principles we need: why isn't there, for example, more transparency in the example I gave; where is the human rights impact assessment; all of these things that need to be there.  We have to acknowledge, on top of that, that the pandemic is not just a health or technology issue; it's also a political issue, and the responses are political.  So, in my view, even the choice to use a technological solution is a political and ideological response, which I understand is a little outside the remit of this session, but not in the context of how it impacts human rights.

When you have states deciding that it's okay to use surveillance companies to track whether or not people have coronavirus, is that okay?  I'm not sure it is.  They could maybe have chosen a different company, because of the perception and precedent that sets.  And again there are these extraordinary relationships going on, like the example Gary gave around the maps: not only Apple but Google looking at the trends.  It's extraordinary to think about a technology company telling a country where its people are.  We're relying on these companies to tell us whether or not people are staying at home.  It's an extraordinary relationship and power dynamic as well, because governments need those insights and don't have them; they don't know if people are staying home or not, and the only thing they can do is put people on the streets, which is no more effective than a Google trends map or an Apple trends map.  And the same goes for citizens.  They put 100 million pounds into the moonshot project, which is the budget for the health system for the year.  These are political decisions.

   >> MARK HODGE:  Thank you, Stephanie.  I knew we would want more than an hour for this panel, but I think it's helpful to layer that complexity, and the reality of the political nature of some of the choices governments are making, into the conversation.  We can't ignore that.

Gary and Phil, I'm going to give you both an opportunity to come in next.  In the interest of time, I'm also going to flag the question we've had in the Q&A from the audience; if you are able to weave it into your response, great, and if not, don't worry, but I wanted to put it out in case anybody wants to comment.  It's really the question of how engaged Asia's big tech platforms are on these questions around privacy, security, and developing ethical codes.  The pandemic is global and these challenges are global as well, so how much movement are we seeing, from governments there, but also from companies?  But don't feel bound to that angle in your comments.  I want to open it up to Phil.  And Gary, I think you're with us on your phone now; did you want to weigh in at this point, if you can hear us?  I can see your name lighting up, but we can't hear you.  Phil, why don't you go next, and then we'll see if we can get Gary back?

   >> PHILIP DAWSON:  Sure.  Well, I just want to respond to Stephanie.  I definitely agree with all of that.  It's highly idealistic to imagine governments being able, in the near future or maybe even ever, to rival the engineering power and resources of Apple and Google, especially at this speed and scale, in a matter of months.

I think it's maybe a bit more realistic, in the context of smart cities, for governments to build simple applications whose genesis and governance they can control.  That's probably more realistic.  To have relied on governments to come up with this type of app in the pandemic, that would never have happened.

I know I've seen a lot of criticism globally, but especially in Canada, just like what you were saying, that governments essentially allowed two of the biggest companies in the world to dictate the parameters of one of their responses.  While I agree with the points you were making, as a gentle pushback I would say that a lot of it also depends on how you explain what the tool is really going to achieve: whether it's being sold as a critical solution for the pandemic and our best hope of reopening the economy, or whether it's being described as one of a dozen approaches the government is taking across many sectors, with transparency about the weaknesses of the app and the fact that without uptake it may never actually succeed, and that if it doesn't succeed, they'll shut it down.  One of the terms of reference for the app in Canada is precisely that: there will be advice from the external advisory council and a public recommendation, if need be, on winding down the operation of the app, whether because there is no longer a need for it or because it just hasn't been useful.

Those things, at least, have made me feel that it's a more responsible approach to describing the real potential, opportunity, or value of the app.  I understand that not everybody will internalize the message that way, and people might over-rely on it; it's complex.  I know we just have a few minutes, so I'll respond briefly with that and see if anybody else has something.

   >> MARK HODGE:  Thank you, Phil.  I'm going to add, in the interest of time as well, so I'm going to see if Gary and John if you want to comment on anything that you've heard and share any reflections?

   >> GARY DAVIS:  I have tried.  I'm not sure if you can hear me.

   >> MARK HODGE:  We can hear you.  You go, Gary.

   >> GARY DAVIS:  You can?

   >> MARK HODGE:  Yes.

   >> GARY DAVIS:  I think there's a bit of a delay, but obviously I don't disagree with many of the points that were made.  In relation to the contact tracing initiative particularly, we weren't seeking to be involved, and there was no commercial imperative to be involved.  It was a decision made, and I can speak for Apple, obviously, in response to the level of contact we were getting from public health authorities and the unique circumstances of that crisis.  We've never faced such a request before, actually, which I think differentiates it from everything else.

And I hope, actually, that we never face circumstances that would lead to a similar request, because these were the unique circumstances of a global pandemic, and I'm hoping it does not happen again.  But certainly, if the same circumstances arose, I believe we would be available to help again, but really, they were -- (lost audio).

   >> MARK HODGE:  Okay.  I'm not sure Gary was quite finished, but it seems he's suddenly on mute, so I apologize, Gary; I'm not sure what's happening.

John, I would like to give you maybe the last word on capacity building.  Implicitly there is a question of resources, but also of capacity and knowledge within government, and I wondered if you could speak to that quickly, and whether the Safety Commissioner idea is focused on building capacity within different agencies?  And then I'll have to draw us to a close and say thank you to everybody.  Thank you, John; you can go next.

   >> JOHN HOWELL:  Sure, Mark.  Perhaps I might tie that question in with a couple of comments on the last questions to the panel.  For example, Stephanie's point about new uses of technology being deployed because there is a crisis that provides a justification for them: I think that is a comment that goes beyond technology.  Governments are implementing all sorts of measures now under the rubric of "there is a pandemic we need to deal with" that will have longer-lasting impacts.  For example, in Australia there is proposed legislation to enable the military to be called out domestically to assist in responding to the pandemic and other similar situations; those increased call-out powers would survive well beyond the pandemic, indefinitely, and that has big civil rights implications.

But perhaps the way NHRIs such as the Human Rights Commission can play a role in decision-making is simply by monitoring proposals to respond to the pandemic, attempting to insert human rights discourse into decision-making, and reminding decision-makers of the need for necessity and proportionality in implementing measures.  We do need measures at the moment to respond to the pandemic, but we also need to make sure we think about the broader human rights context.  NHRIs are generally independent but have a close relationship with government, and I think are well placed to be a voice in that decision-making.  That's a little separate from the question of capacity.  We, like everyone else, sometimes don't have the technical capacity to be experts on every tech question, and the Safety Commissioner would be a slightly different role for us, but might be able to build our capacity as well.

   >> MARK HODGE:  Great.  Thank you, John.  We are at the end of our time; actually, we're a few minutes over, so I just want to end by saying a huge thank you to John, Phil, Stephanie, and Gary for leading us, within an hour, through the complex space of where the state and the private sector interface on these issues: how we both rely on the expertise of the private sector and create oversight of it, and how we build the capacity of states in that space.  There is a particularly interesting question of whether a moment of crisis is where we have the opportunity to create more visibility and rigour, versus the times when we're not paying attention and it's just sort of happening.  And maybe it doesn't happen in just one go; there are many governments approaching companies in many ways, day to day, for different functions of government.  To look at that, I think, is an interesting thing we can reflect on as we move forward.

Within the B-Tech Project, these types of sessions are really for us to begin to hone our thinking and identify the questions of particular interest that we should try to unpack.  So, we're going to take this away, and we'll probably do a small write-up, with the help of the panelists, to make sure we've covered the key angles.  We will also quite shortly be releasing a foundational paper on this area of work that will, no doubt riffing off today's conversation, point to opportunities for deeper dialogue, longer than an hour, to unpack some of these questions, but also opportunities to push for solutions, in the short term and the long term.  Some of these things are going to take a while; when we talk about government capacity, we know it's often a long haul, and there are questions of resources too.  That's my effort to, quite badly, summarize things.

There will be a short summary report that will be far more eloquent in that regard.  So, thank you all.  Please stay in touch with each other, and through the B-Tech Project we hope to go deeper into this question over the years ahead.  Thank you, and enjoy whatever time of day you are entering into now.  Thanks for joining.  Take care.


Contact Information

United Nations
Secretariat of the Internet Governance Forum (IGF)

Villa Le Bocage
Palais des Nations,
CH-1211 Geneva 10

igf [at] un [dot] org
+41 (0) 229 173 411