IGF 2021 – Day 2 – Town Hall #17 Emerging Technologies in conflict and maintaining peace

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all want to trust.

>> And to be trusted. 

>> We all despise control.

>> And desire freedom.

>> We are all united. 

>> BRETT VAN NIEKERK: Hey, good morning or good afternoon.  My name is Brett Van Niekerk.  I'll be chairing the session.  I see there are a few people there in Poland and a few people online as well.  Most of us, as organizers, are online.  Unfortunately, we are all in South Africa and were not able to travel due to the COVID pandemic, so we are currently, unfortunately, all remote. 

So, I'm going to start sharing my screen in a few minutes.  I see some people still walking in.  I'll get my slides up now.  I think the general housekeeping rules from all the other sessions will apply.  We will try to engage both with those online as well as those in the physical venue. 

What we're looking at, or hope to discuss today, is the impact of emerging technologies within conflict as well as peacekeeping scenarios.  So, it is a town hall discussion.  Anyone is free to make comments.  We will provide a few suggested questions that will help guide discussion and enable us to ultimately provide some key takeaways, as well as any calls for action that may emerge afterwards. 

So, with me are also Joey Jansen van Vuuren from Tshwane University of Technology, and Trishana Ramluckan and Louise Leenen from Cape Town.  So, whilst we are from diverse academic institutions, we are delivering this as part of the International Federation for Information Processing (IFIP), particularly a Working Group focusing on ICT in peace and war.  So, I'll give a little bit of background on the Working Group as well, just so you are aware of where we are coming from.  IFIP is essentially an international organization.  It's apolitical and focused primarily on ICT, as the name suggests.  It is recognized by the United Nations as well as other world bodies, represents 38 national or regional IT societies, and comprises 14 technical committees covering a range of areas, from security to artificial intelligence and so on. 

So, we are here from Technical Committee 9, which focuses on ICT and its relationship to society.  It aims to understand how ICT innovation is associated with change in society and also, obviously, to influence the shaping of socially responsible and ethical policies and practices related to ICTs. 

So, the Working Group is the newest in the Technical Committee, established in 2013.  Currently, we've got around 30 to 40 members from about 17 countries.  The focus really is to integrate academia, research, industry, government, and civil society, looking at how technologies will impact peacekeeping, conflict scenarios, and security.  If anyone is interested in joining, my email address is up there.  Unfortunately, that email address might be cancelled fairly soon because I'm moving jobs, but in the reporting we will provide contact details for you to contact us, should you wish to join the Working Group or if you have any further questions. 

So, just a little bit of background information on where we're coming from and what we're thinking in terms of key trends.  We've obviously seen a number of major cyberattacks reported in the news media, including attacks on major cloud systems, which have affected a number of corporations as well as countries.  We had the SolarWinds incident at the end of last year, which affected a large number of organizations through their network management software, and again had quite a global impact.  We've also seen the effects of ransomware, quite often on health care systems during the pandemic, but also in the U.S. when a pipeline was targeted, with severe social knock‑on effects due to panic as well as the unavailability of fuel on the Eastern seaboard. 

In South Africa, we had one of our major ports targeted by ransomware earlier this year.  Again, it caused massive disruptions to imports and exports, and had quite a severe social impact as well. 

So, stemming out of all of this, we see cyber diplomacy.  Generally, the main focus is on the United Nations: the Group of Governmental Experts as well as the Open‑ended Working Group, both of which have provided reports and closed off this year, with the next Open‑ended Working Group kicking off next week on the 16th.  And this was unusual for the UN in that the Open‑ended Working Group had a number of multi‑stakeholder participants as well, and I think this is quite important and something we can discuss in a broader context within this town hall. 

The Paris Call for Trust and Security in Cyberspace recently presented a number of outcomes from six working groups at the Paris Peace Forum, but most of these focused on norms‑based approaches for responsible behavior in cyberspace.  The challenge is that this still looks at things from a cybersecurity perspective, and the use of certain ICT technologies is not purely cybersecurity‑related.  Then there is disinformation within international security, again very prevalent in the news media in relation to elections, as well as during the pandemic, with related information being distributed quite widely on the internet. 

In addition, artificial intelligence is becoming more prevalent in international security, so there have been a number of ethical and regulatory discussions.  The UN is proposing a moratorium due to the biases of artificial intelligence in certain applications, particularly related to security.  So, we are looking at things along the lines of: should AI, or facial recognition, be used to make a decision on whether someone has performed a criminal act, when there have been cases of potential misidentification?  So, depending on the implications, there needs to be a risk‑based approach to using artificial intelligence, or to how it affects various decision‑making processes. 

The problem is, artificial intelligence also underpins a number of other things, including autonomous weapons systems and autonomous cyber weapons systems.  Social media platforms use it as part of identifying hate speech or disinformation.  So, if your AI fails, ultimately your defense mechanisms tend to fail.  It's also being used in some cybersecurity scenarios for threat detection, but it's also being misused: artificial intelligence is being used to generate fake news and deepfakes for disinformation, and it can be used for AI‑enabled cyberattacks and AI‑enabled disinformation or computational propaganda, where it helps to control social media and drive disinformation campaigns. 

And obviously, from the autonomous weapons systems side, again, we are seeing this fitting in broadly.  There is a strong ICT component, in that there is a degree of artificial intelligence and communications technology used to control these systems.  And again, on the ethics, the laws, and the regulations, there have been a number of discussions, as with the other areas, and a lot of the feedback is that this is falling short.  People tend not to be satisfied. 

Then recently, the challenge of determining the difference between how you control a physical autonomous weapons system and an autonomous cyber weapons system has been raised.  There were discussions earlier this year at the (?) conference, where they discussed this in a little bit more detail. 

But the key challenge we are seeing is that all of these issues we've been discussing are in siloed working groups.  So, there is a GGE for autonomous weapons systems within the UN, but there is also a GGE for ICTs in international security.  Separately, there is a working group for cyber mercenaries.  Separately, there is a working group for cybercrime.  Again, there are different forums for artificial intelligence.  So, this poses a bit of a challenge, when all of these concepts ultimately overlap. 

So, this leads on to some of the discussion points we would like to raise.  These are not definitive questions; we can stray slightly from them.  But what we'd like to try and provide as key takeaways or calls for action is looking at: what is the need for regulation regarding cyber as well as AI in conflict use, particularly at an international level, looking at human rights?  Often, there is a case that national laws take precedence.  Some countries tend to push back against international regulation, saying that cyber or AI use internal to their sovereign boundaries is their sovereign right, and they govern that how they see fit. 

Another question is, should we actually consider all of these issues under one forum when we start talking about regulation, or should they continue in these separate forums as we currently see?  So, the proposal from one perspective is that we have a single forum with possible thematic areas to be able to deal with cyberattacks and cybersecurity; disinformation, which obviously strongly overlaps; artificial intelligence and its use in both physical security and cybersecurity scenarios; and then, obviously, physical autonomous weapons systems. 

And a third area is, in terms of inclusivity, what exactly is the role of developing nations within this?  Obviously, that has been a challenge.  We see your traditional big players ‑‑ we call them superpowers, even though the term is no longer used ‑‑ being the ones trying to drive the agenda in many of these instances, due to the fact that they have developed the technology.  So, those countries that are maybe slightly behind in technological development, what role do they have to play in these types of discussions and regulations?  What challenges do they face with that, particularly bearing in mind that they might have other political or infrastructural challenges in dealing with this?  But at the end of the day, they might also be severely negatively affected by the proliferation of, for instance, autonomous weapons systems or certain forms of advanced cyber weaponry. 

So, with that, I will open up the floor to anyone who would like to start off the discussions.  So, what we'll do is if there's any raised hands online, I will take those.  And otherwise, we can go to the physical venue, if there are no raised hands online. 

>> JOEY JANSEN VAN VUUREN: Joey from Sweden speaking.  I just want to add something that is, for me, very, very important: what governments are doing when citizens are using social media ‑‑ that they actually come down and stop the communications to prevent citizens from sharing information on social media.  That means the government then takes away the voice of the citizens when something is going against the government.  So, I think we must also take that into consideration. 

>> BRETT VAN NIEKERK: Thank you.  That is a very good point.  And I think that also raises the role of the developing nations, because that is very prevalent in Africa: when there is a threat of protest action or unrest, quite often the first reaction is to close down social media.  There have also been reports, I think during the Arab Spring, back in the earlier part of the last decade, where governments actively targeted social media to try to track and trace some of the perpetrators of the protests.  So, that aspect of privacy and human rights does come in. 

In addition, I think also, just from a fake news perspective ‑‑ again, prevalent in Africa ‑‑ there is a strong focus on immediately making any form of spreading fake news illegal.  So, just a hard rule that if you spread disinformation regarding the pandemic, it is an illegal act, which, again, had some human rights groups a little bit concerned: can that now be extended if the government is being criticized for some reason?  Will they then make that type of criticism illegal?  We have also seen instances under a cybersecurity law where journalists have been silenced and arrested for criticizing the government.  So, I think that's a good point about the interaction of social media within the broader context of ICT. 

Would anyone else like to add onto that or contribute to that?  I don't see any hands up in the chat for Zoom.  Is there anyone in the physical venue wanting to add anything at this stage? 

>> LOUISE LEENEN: Perhaps I can jump in while we're waiting.  I don't see any hands up.  I just want to respond in more detail to your first question, in terms of the regulation of cyber and AI, because the use of social media will also fall under this umbrella.  How do you actually force a country to act and have laws that are in line with international guidelines, or, you know, at least with the majority of countries internationally? 

So, I think the only way a community can have influence on these types of legislation, acts, and responses from governments, especially from developing countries, is through regional and international groups, such as BRICS, the G20, and the Southern African Development Community. 

I think it is important for practitioners to join Working Groups that have influence on these regional and international groups.  For instance, BRICS has an academic forum.  That is the only way I can see where researchers, practitioners, and individuals can actually play a small role in pushing governments to act according to the agreements that they are involved in. 

For instance, one example is that the G20 countries adopted a set of AI principles, I think in 2019.  It means that all those countries ‑‑ and there are some developing countries that form part of that group ‑‑ have now agreed to adhere to at least those AI principles.  Similar types of situations exist for cyber regulations, and even influence on cyber legislation. 

>> BRETT VAN NIEKERK: I'm also aware of similar discussions around autonomous weapons systems.  But again, you do get the disagreements. 

I think, from one perspective, the United Nations, with the Open‑Ended Working Group that just ended, did try to address that with multi‑stakeholder participation.  Some countries opposed that; others strongly supported it.  I don't think South Africa in particular was an opposer, but they did support it ‑‑ I think it was Australia and Canada that were very supportive of that process.  So, it did give the non‑State stakeholders a voice.  But one of the issues we found arising out of these types of technologies is that they are largely controlled by non‑State actors, if you think of ICTs and the internet in general.  They're controlled by your service providers, generally private organizations.  The technologies are produced by private organizations, so your big tech ‑‑ Microsoft, Google, Apple, and so on. 

Likewise, you would assume things like autonomous weapons systems are being developed by the large manufacturers.  So, there is a degree of private industry behind all of this.  Is it really sufficient, then, to limit things like the GGEs and Open‑Ended Working Groups purely to government representation?  Should we not have more multi‑stakeholder representation in some of these forums? 

The Paris Call was very multi‑stakeholder.  It was open to anyone.  Whereas with the Open‑Ended Working Group, whilst there was representation, it was informal, on the sidelines, where we could make commentary and the States could take that on board, but they didn't have to.  It was still the States that ultimately influenced the outcome of the report.  But I think there are also other potential legal challenges in terms of the regulation that we do need to look at.  In the absence of any raised hands ‑‑ I still don't see any ‑‑ perhaps Dr. Ramluckan can take us through some of the legal aspects. 

>> TRISHANA RAMLUCKAN: Yes.  So, just quickly.  Under international humanitarian law, it's assumed that they follow three principles: the principles of distinction, proportionality, and precautions during an attack.  This is where it gets complicated, because the distinction is between military and civilian, which means the control is still in the hands of, say, a dictator or a commander or a president.  And that becomes the main issue. 

So, legislating the control of cyber or the use of AI in conflicts becomes very complicated.  In international humanitarian law, they usually fall back on, I think it's Article 36 of Additional Protocol I to the Geneva Conventions, and on the Laws of Humanity, which is different from humanitarian law: people must not be reactive, and they must abide by this Law of Humanity.  So, it follows through with empathy, et cetera.  But obviously, you're going to have the problem of how reliable the commander is.  And obviously, there are other issues that you need to consider. 

We also fall back on the Martens Clause, which was introduced, I think, in the 1800s with the Hague Convention.  So, while we try to have one governing piece of legislation, it does become difficult for all States to abide by.  Our fallback currently is on Protocol I to the Geneva Conventions and the Laws of Humanity.  So, I think it is important to ensure that there is legislation ‑‑ obviously, a treaty or something to that effect ‑‑ so that you have people or States signing up to abide by it.  But then what do you do in cases where some States say they don't want to do it, so they don't want to sign the treaty?  It can't be forced.  And then I think we fall back on the Law of Humanity.  So, leaders just need to be careful when using cyber and AI in conflicts. 

>> BRETT VAN NIEKERK: Would anyone else like to respond to that?  I think what we also see quite often with the aspects of legislation is that, whilst things like the Group of Governmental Experts as well as the OEWG on ICTs in international security do agree that the general human rights conventions, or your general international security principles, apply to cyberspace, the problem is that that is a very high‑level statement. 

When you start going into some of the details, things start to break down.  So how do you define sovereignty within cyberspace?  Is it if you're using the system?  Is it where the undersea cable crosses into your territorial waters?  Is it where it lands on your physical territory?  How does cloud computing, which could be remotely -- you know, on a different continent -- how does that apply to sovereignty if you have critical data or critical services there? 

So, whilst the principles have been agreed to, the challenge becomes how we actually uphold them.  There have been a number of proposals, and everyone agrees to them, but how do we implement the norms?  How do we sometimes even interpret the norms within specific national contexts?  And I think that becomes a challenge.  Now there's a lot of discussion around confidence‑building measures and capacity‑building in order to try and address the challenges of having norms, as well as around autonomous weapons systems and artificial intelligence.  With a norms‑based approach, how do we actually start implementing it and holding the countries to what they commit to? 

So, I am going to post a couple of documents that people have been talking about.  I'll post them in the chat for everyone online to see.  Again, we can make these available as separate documents within the reporting later on.  So, there's one on regulating autonomy in weapons systems.  Again, as Dr. Ramluckan was stating, we do have issues with the level of autonomy, starting from a simple rule‑based system.  So, if you look at maybe a close‑in weapons system on a ship: if an inbound object is at a certain speed, a certain height, with a certain radar profile that looks like a missile, it could then automatically fire in self‑defense.  So, a very, very strict rule. 
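
A minimal sketch of the kind of strict rule described here might look as follows.  This is illustrative only; the thresholds, field names, and function are hypothetical and not drawn from any real system.

    # Purely illustrative sketch of a strict rule-based engagement check,
    # as described above for a close-in weapons system. All thresholds and
    # field names are hypothetical, not taken from any real system.
    from dataclasses import dataclass

    @dataclass
    class Track:
        speed_m_s: float            # measured speed of the inbound object
        altitude_m: float           # measured altitude
        radar_cross_section: float  # simplified radar profile value

    def should_engage(track: Track) -> bool:
        # Every condition must hold; there is no learning or judgement involved,
        # which is what makes this a "very strict rule" rather than autonomy.
        looks_like_missile = (
            track.speed_m_s > 250                  # fast inbound object
            and track.altitude_m < 100             # sea-skimming profile
            and track.radar_cross_section < 1.0    # small radar signature
        )
        return looks_like_missile

    # Example: a slow, high-flying object is never engaged under this rule.
    print(should_engage(Track(speed_m_s=80, altitude_m=3000, radar_cross_section=5.0)))  # False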

Then you take your autonomous weapons systems, which could potentially be used in an offensive capacity.  Now, the challenge is that there was a UN report of a friendly fire incident involving such a system in Africa.  What are the implications there?  It raises a number of questions: can an autonomous weapons system adequately identify someone who might have a weapon but is in a non‑threatening posture, so weapon pointed down, versus someone with a weapon raised?  Can it determine whether firing on a target falls within an acceptable level of risk to civilians?  There is a movie called "Eye in the Sky" which revolved primarily around that point: are they going to use a drone strike against a high‑profile target when there are civilians within the potential impact area?  And it was really around badgering the legal experts into allowing a drone strike. 

But now, if it's an autonomous weapons system, can it accurately do that with sufficient repeatability in all cases?  Is it more accurate than humans?  Obviously, the advantage is that it has no emotion, it is clinical, but what happens if something goes wrong?  So, then you have the concept of a human who can override an autonomous weapon.  Again, how much detail do these norms and regulations go into?  And again, unfortunately, there are certain strong countries that tend to oppose any form of regulation. 

But then again, I think we need to start looking at artificial intelligence in general within a security context, because I think regulating artificial intelligence, to a certain degree, may also aid in containing your autonomous weapons systems.  So, things like facial recognition: there are obvious biases, not intentional.  But if you train artificial intelligence on a certain demographic, it might be very good at identifying people within that demographic.  The problem is, as soon as you move outside of that, you're going to have some form of overfitting where it doesn't know what to do, so inputs just get classified as something.  And I've got a student experiencing that at the moment, where there was one particular set of data he was using for training, and then anything unidentified just became classified, incorrectly, as something within that specific set of data. 
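
To illustrate that failure mode, here is a minimal, purely hypothetical sketch: a simple classifier trained on a narrow set of synthetic data will still assign one of its known labels to an input far outside anything it has seen, rather than reporting that it does not know.

    # Purely illustrative sketch of the failure mode described above: a classifier
    # trained only on a narrow set of data still assigns *some* label to anything
    # it has never seen, rather than saying "unknown". The data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training data: two classes drawn from a narrow region of feature space.
    class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
    class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
    centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

    def classify(x):
        # Nearest-centroid rule: always returns a label, however far away x is.
        return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

    # An input far outside anything seen in training is still given a label.
    out_of_distribution = np.array([-40.0, -40.0])
    print(classify(out_of_distribution))  # prints "A", never "don't know"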

So, in academia, we can discuss this and ask what the implications are for the study.  But in the real world, if you misidentify someone as having committed a crime when it wasn't them, they could potentially go to jail for ten years for something they didn't do.  Likewise, if you start looking at hate speech or something online, where a system is trying to detect whether your tweet is hate speech or offensive: the way that is done, sometimes you start capturing words, reducing them down to the base of the word, and adding, you know, some correlation.  I'm oversimplifying a bit.  But is that actually adequate to identify something as hate speech? 

So, an example that was used previously was, if you say that "Hitler doesn't like Jews," that's a statement.  It doesn't imply any hate speech.  If you say, "Hitler doesn't like Jews, and I agree," now you're providing an opinion.  You're now asserting that that is correct.  So, that now potentially becomes hate speech.  But can an artificial intelligence algorithm actually identify sufficiently the nuances of those two statements?  Because the vast majority of the words are articles, which get excluded from the identification. 

Another example ‑‑ and this came out of a study on anti‑Semitism online, which is why we're using these examples ‑‑ was a statement that implies something that is hate speech.  Saying "not all Jews are greedy" is hate speech, because you're implying that the majority of them are.  Those nuances, where a statement that is actually negated is still hate speech, come down to the subtleties of language. 

Now, that becomes a problem for artificial intelligence.  And if you are using those types of systems to identify whether a specific person is committing hate speech or not, is that not a problem?  Because you could either, one, miss something that is very offensive, or two, falsely accuse someone who was actually trying to negate a statement and was misclassified. 
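
A minimal, purely illustrative sketch of the naive keyword‑based approach described above shows the difficulty.  The word list and scoring here are hypothetical, and real systems are more sophisticated, but the underlying problem with endorsement and negation remains.

    # Purely illustrative sketch of a naive keyword-based flagging rule, as
    # described above. The trigger words are hypothetical; real systems are
    # more sophisticated, but the difficulty with negation and endorsement remains.
    FLAGGED_TERMS = {"hate", "greedy"}  # hypothetical trigger words after stemming

    def naive_flag(text: str) -> bool:
        words = {w.strip(".,").lower() for w in text.split()}
        return bool(words & FLAGGED_TERMS)

    statements = [
        "Hitler doesn't like Jews.",               # reporting a fact, not hate speech
        "Hitler doesn't like Jews, and I agree.",  # endorsement: arguably hate speech
        "Not all Jews are greedy.",                # negated, but still implies a slur
    ]

    for s in statements:
        print(naive_flag(s), "-", s)

    # The keyword rule treats the first two statements identically (neither is
    # flagged), missing the endorsement that changes the meaning, and it flags
    # the third only because of the trigger word, not because it understands
    # the implication behind the negation.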

So, these are the challenges we face when we try to regulate these types of things.  And the question is, do we do that at an international level, or do we still allow it at a national level?  And, as was said, how do we get nations to follow through on that? 

So, are there any other queries or anyone want to make comments from the physical venue?  Okay, I still can't see any hands there.  I think I see a hand raised right by the door, if I'm correct. 

Okay, moving forward, I think we've only got about 20 minutes left.  If we move on to maybe some more discussion around the actual forums: is there any specific consideration?  Should all of this be considered under one specific forum?  So, we would include cyber weapons and cybersecurity together with disinformation, as well as autonomous weapons systems, and then have specific thematic tracks, maybe under the United Nations or another body.  Or is there any strong opinion that it is sufficient to have them handled separately under the current forums? 

>> LOUISE LEENEN: Brett, I'll jump in, if there's no one else. 

>> BRETT VAN NIEKERK: Okay. 

>> LOUISE LEENEN: I think although it would be good to have discussions under one forum, it might be that the nuances of these topics are quite wide.  So, you may get more useful results if you have these specialized forums, but then, in addition to that, I think they should be discussed as one group as well.  Otherwise, you might have this siloed approach that you mentioned earlier.  So, I think it's a bit of both, in my opinion. 

>> BRETT VAN NIEKERK: Okay.  That might be quite nice, having a hybrid option, where there can be a separate forum, maybe including representation from all of the other forums to discuss this. 

Again, what we see quite often is that the NGOs tend to have a technology policy sort of division that will focus on a number of issues, which will then be incorporated and cross‑cutting.  But then again, the challenge becomes the multi‑stakeholder representation. 

Again, the challenge might be, as you said, the nuances, but also potentially some of the applicable regulations or legislation.  Because autonomous weapons systems sit under conventional weapons norms and legislation, whereas, currently, artificial intelligence or disinformation obviously doesn't sit under that specific pillar.  So there might be subtle differences within the various regulations that might influence various decisions or even, potentially, create some form of conflict between the various perspectives.  Joey or Trishana, do you have any input related to that? 

>> JOEY JANSEN VAN VUUREN: I totally agree, if I can come in, Brett.  I totally agree, because this question about cyber weapons was discussed a lot, even in the early 2000s and around 2010‑'11, and people could not agree whether cyber weapons are part of weapons systems or not.  And I think with AI, it is actually becoming more difficult, because with AI added, you can get information that you couldn't have got previously, and it is not as easy, like you said, to determine things with AI and the models we're using.  It is not like a weapon, where you're either shooting or not shooting.  AI is very different, and there are a lot of things in between shooting the weapon and not shooting the weapon, and that makes it very difficult to decide where it fits in.  Does it fit in with the normal weapons or not? 

And the other thing that we must also take into consideration is that these things are nowadays also used by individuals and groupings inside countries, which is quite different; that is moving more towards cyber terrorism.  And it's much easier for them to go into cyber terrorism, because getting physical weapons is more difficult; it's much easier to use AI systems to do it. 

>> BRETT VAN NIEKERK: Yeah.  I think, then, maybe that can actually be used to motivate the need for at least one forum that can consider all of these, because now we've introduced the concept of a cyber weapon and AI, but these can also be used by cybercriminals, not necessarily within a nation state.  So, having a distinct Working Group on cybercrime versus a distinct Working Group particularly on international security becomes problematic, when all of these types of things strongly overlap and it is ultimately just a question of who is using them and for what purpose. 

And if you look at fake news and misinformation, there have been cases where cybercriminal groups have been strongly involved in the distribution of disinformation.  For some, it even became the primary means of generating an income.  And from personal experience ‑‑ some of us provided input into the Cyber Mercenary Working Group ‑‑ we found a number of challenges with how to define a cyber mercenary.  If we take the traditional approach of what a mercenary is, the cyber mercenary aspect tends to fall down, because a mercenary in a physical conflict zone meets certain criteria.  A cyber mercenary ‑‑ could that be something along the lines of, maybe, the NSO Group with the Pegasus spyware, selling that type of technology to non‑state actors?  So, I think from that perspective, you know, there is a discussion. 

I see something coming up in the chat.  Bruce Watson agrees on the unified forum because of the very low barrier to entry.  And I think that, again, is a very important point, and it probably also leads to the third question.  You have a low barrier for certain uses of cyber, disinformation, and AI, but when we start going into autonomous weapons systems, particularly maybe something more advanced, like autonomous drones, there may not be such a low barrier.  Therefore, certain countries may be at a disadvantage.  So, I think we're probably moving towards agreement that we do need some form of unified forum to discuss everything. 

Bruce, is there anything else you would like to add?  I think you can unmute yourself, if you would like to talk. 

>> BRUCE WATSON: No, I think you captured everything there.  Thank you, Brett, for a very interesting discussion.  Thank you. 

>> BRETT VAN NIEKERK: Thanks for your comment there.  So, yeah, I think we've got some form of agreement on the forum, at least.  Going forward, in terms of developing nations, are there any perspectives on the roles that they can play and the challenges that they may face?  They are obviously often not included in some of the United Nations groups, like the GGEs, which has obviously necessitated the Open‑ended Working Group to be more inclusive.  So, when we're discussing this, how can we go about providing more inclusion for developing nations, and what potential challenges may they face?  Just as an example of a challenge we can consider: political will, or maybe disinterest, because it does not immediately impact them when they have other more immediate concerns, like famine, in certain countries.  But obviously, quite often you can see they want to be involved, at least, but for some reason struggle to do so. 

>> LOUISE LEENEN: Brett, one small way in which this can possibly be done is for international institutions, such as a lot of these global research‑based communities, to pay attention to the inclusion of members from African countries.  I think there is quite a need for strong regional and local AI communities, but sometimes they need support in terms of traveling to conferences or supporting researchers at these universities.  So, I think whenever there is some funding available for conference attendance or research projects, perhaps some of it can be reserved for participants from African countries, just to encourage that community to become active in this regard as well. 

>> BRETT VAN NIEKERK: Not only African, but all developing nations that ‑‑

>> LOUISE LEENEN: Sorry, yes, all the ‑‑

>> BRETT VAN NIEKERK: I know we focus on Africa, but there is very strong representation from Brazil within one of the Paris Call Working Groups, particularly in terms of emerging countries.  And actually, it was the local government that was strongly advocating for ‑‑ or promoting ‑‑ the emerging countries.  But again, even within that working group, in trying to bring on the emerging countries, we had very limited success.  So, for instance, the South African government wasn't interested; they looked at it but didn't bite.  A few South African companies did join.  I think there were a few others from Africa, and then, obviously, Brazil driving some of it.  I think there was quite a strong representation from Brazil coming through, and a few from Southeast Asia. 

But ultimately, against the target that had been set and what they had hoped for, they underperformed severely; we just didn't get those aspects.  There's a comment in the chat that developing countries value their sovereignty too much and feel that by agreeing to these protocols they may lose part of their sovereign powers.  And I think, yes, that is a good point.  We have seen those reservations being raised by certain countries regarding their sovereignty, particularly in relation to things like, I think it was, the Budapest Convention on Cybercrime. 

So, I think, again, that challenge still becomes problematic where certain countries, and maybe even some developed countries, feel that these types of technologies and how they deploy them within their physical borders are their sovereign right, and anything else would then be interference in their national processes.  That, again, becomes a challenge, because they are trying to use the same international security argument to be left alone and allowed to get on with it. 

So, I think this can become a double‑edged sword.  And again, I think developing countries are also maybe caught between some of the major players.  So, we've seen the Cisco versus Huawei type of scenario, where certain providers are preferred by certain countries. 

Now, a developing nation that is trying to get its infrastructure going, or to keep its infrastructure up, might not have the same criteria for how to select a certain service provider.  For them, cost might be the primary factor, or some other factor.  But then, at the same stage, that might preclude them from certain other agreements, because they are now using what's seen as an insecure or competing technology.  So, you get this degree of, maybe not conflict, but a sort of juncture within international security that is maybe also pushing certain countries into making decisions which they might not normally take.  It makes things more difficult for them and makes them more concerned regarding their own sovereignty. 

Again, Joey also mentioned social media.  So, if we look at the specific policies of Facebook and WhatsApp and how those things run, they don't necessarily align with specific national laws.  But because they're big tech and they've got buying power, they sometimes tend to feel that they can just override national laws and force policies through.  So, in South Africa, with our privacy act, there was a strong reaction to how Facebook was implementing certain policies, whereas in Europe, obviously, it aligned quite strongly with the GDPR.  So, there was a feeling of discontent when we didn't have our own version of the policy that aligned with our privacy laws.  I think that there is that kind of a challenge. 

Now, I saw some participants from Kenya.  I know Kenya is very good with things like mobile money and so on.  Do they have any input, either in the chat or by raising their hands, to give opinions from the rest of Africa on technology and regulation in conflict?  I've got a notice that we've got ten minutes to go.  So, if there's anyone else from Africa, or anywhere else in the world, who would like to add some comments, do so now, and then we can begin wrapping up. 

Okay, I don't see anything, so I think we can wrap up.  Going forward, we can say our key takeaways are that there should be at least one forum that considers all aspects of technology within the context of conflict and security, and that there is a need to look at regulation, with its various challenges.  Then maybe the call to action would be to try and form such a body or a Working Group, under the United Nations or another relevant body, to implement such a forum or discussion group. 

Okay, there does not seem to be any ‑‑

>> JOEY JANSEN VAN VUUREN: Can I just add something?  I'll put on my screen now.  Now I've got the video as well.  What I think is also very important is that, while we say there should be only one group, we must also acknowledge, as we said, that such a group can become very one‑track‑minded.  So that means when we do form the group, we need to have very large buy‑in for it, and maybe different sections of the group taking part, so that it's not just one group that people can say is single‑minded. 

>> BRETT VAN NIEKERK: Yes, that is correct.  Thank you.  Okay, I think without any further comments or questions, we can close this session slightly early.  I think there are only about five minutes to go.  Thank you all for your participation.  We will aim to have our key points and some of the other supporting documentation up as soon as possible.  And then, obviously, the report with the correct contact details should be up by the 20th of December, I think.  Thank you, everyone, and thank you to our hosts at the IGF for all of the support.  We can now close the session.  Good day. 

 

     (Session concluded at 1145 CET)