IGF 2016 - Day 3 - Room 4 - WS111: Empowering and Educating the Next Billions of Internet Users

 

The following are the outputs of the real-time captioning taken during the Eleventh Annual Meeting of the Internet Governance Forum (IGF) in Jalisco, Mexico, from 5 to 9 December 2016. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

>> SHANE TEWS:  Good morning, everyone.  Anyone who wants to join us at the table, this is workshop 111, Empowering and Educating the Next Billions of Internet Users as it specifically relates to cyber and digital citizens.

So just make sure you're in the place you want to be.

So we who are in charge of the dialogue were discussing that we had kind of a plan A and plan B.  Part of that was, if we had people who wanted to go into breakout groups, we could do short breakout groups on specific topics, which I can run through, or we can do this as a round table dialogue.  I'll do a quick show of hands.  Who wants to do a 15-minute breakout group on a topic?

No one!  Okay.  I guess we are going to stay at the table.  Great.

So I am Shane Tews with Logan Circle Strategies, a visiting Fellow at the American Enterprise Institute.  I have worked with digital citizens, looking at how we manage cyber issues at the level of the individual user and the challenges they have to deal with, which means we need to look at it from a network perspective, as a group of people who are users, and at your own individual responsibility.

I am going to introduce my other colleagues in this discussion, but I want to encourage everyone to participate in today's dialogue.

So directly across from me is Alexa Raad.  She has her own company and holds a patent on pattern mapping, on how we actually use the Internet and how we can possibly use it more securely.  She will talk about that a bit.  The key issues she will discuss with us today are the expectation and the management of privacy and how we do that with our digital mediums.

Directly across from me is Scott McCormick.  I like to refer to him as a threat hunter.  He does incident response and works for major enterprises on how we manage the challenges of the network.

James Edwards is on the other side of Scott.  He is with Internet NZ and is going to focus on trust and the potential for market failure when we use trust as a medium for security in using the Internet.

Anybody else want to introduce themselves into this discussion?

(There is no response.)

>> SHANE TEWS:  Okay.  Alexa, why don't we start with you.

>> ALEXA RAAD:  Can you hear me now?

All right.  So we talked a little bit the other night about the issue of privacy and expectations.  And one of the things that I mentioned was that it is first important to define what we mean by privacy.  In other words, if I were to put numbers out there, say 11231966, 104, and 0322, they wouldn't mean anything per se.

However, if you knew that one of them was my birthday, the other one was maybe an expiration date of a credit card and the last one was the last four digits of my credit card, in context that starts to mean something.

So privacy in and of itself, the fact that you have digital footprints out there, it's a fact of life.  We live in a digital world; unless you are living off the grid, you will always leave breadcrumbs.  The issue is: what information about you is out there?  Do you know about it?  How is that information going to be used in context, and do you have any control or say over it?

There is certain information that is out there about me.  And it is private, but it is private for me, and I appreciate that because, for example, it enables services to serve me the kind of news that I want to hear.  It's pattern matching on my behavior, but it is actually helpful for me.  I don't necessarily want information about the kinds of products that I may buy or my political views out on the Internet, particularly if I'm a dissident.

Again, it is about information, what is out there, but in context, and you being aware of and having control over how that information is used.

>> SHANE TEWS:  Thank you.  That's a very interesting context.  Why don't we run through the two other speakers and then we will talk about this and kind of break it into its parts. 

Scott, talking about threat hunting and some of what Alexa said, that we have information out there in the system.  I know that we have seen a lot more phishing attacks.  It has become a very inexpensive way to make revenue on the Internet, along with the other challenges that we have with cyber.  Talk about that and anything you suggest we look at from a macro perspective.

>> SCOTT McCORMICK:  You have phishing attacks that have been going on for years now, but at the end of the day you also have -- is it back here?

You also have whaling attacks, going after your C suite executives and being able to map that privacy information that is out there and available about people.

>> SHANE TEWS:  Can you describe a whaling attack?

>> SCOTT McCORMICK:  Those are more specific phishing attacks designed to go after a "whale," hence the name: a CEO, a CSO, a CTO of a company.  You end up with somebody that has maybe a vast fortune that attackers are trying to go after.  There are all sorts of reasons why people use phishing attacks to target an individual.  It is an easy way in.  We design systems to help prevent that.  There are ways; it's a lot of end user education.  I can tell you C-level executives are not the easiest to educate on this, but it should be a top priority.  When you look at shopping, or at new technologies coming out, being able to track individuals by use of smart devices, things like that, there are a lot of issues wrapped around that.

Then you get into what threats and risks that poses to a company or an entity, whether it's an NGO or a Fortune 500 company.

>> SHANE TEWS:  James?  Trust and market failure.  What do you have to discuss with us and recommend?

>> JAMES EDWARDS: I'm going to break the pattern.  I'm not going to talk with my back to everyone in the room.

(Laughter.)

>> JAMES EDWARDS: Hi, I'm from New Zealand, and I'll wave the mic at people to come after me as well.  There's market failure here.  People are excited about the Internet of Things and the potential for ubiquitous connections: your wallet, your car, your glasses.  One of the interesting features of this is that these devices are made quite cheaply.  You can go into a shop and pick up something for under $50 or so.  As a manufacturer you might be excited about the opportunity of selling those things.  But there is a set of issues that come about when those devices get on the Internet.  They are designed to be attractive and marketable, not necessarily to be secure.  They have a default password.  Maybe they have no option for changing the password.  Maybe they have no facility for updating the software, and it is good practice in any system, particularly one on the Internet, to update the software so you can find and address vulnerabilities over time.

We are seeing hundreds of thousands, millions of these devices going out.  We expect many more of them on the Internet at some point, and they are vulnerable to being taken over with the default passwords.  This is a large part of what is fueling the botnet attacks, the DDoSing, among others the attack on Brian Krebs, the security researcher.  There is market failure: the buyers and makers of the devices don't care about the negative flow-on effect for security risk.  My organisation has just done a round table on the Internet of Things.
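
To make the default-password weakness concrete, here is a minimal sketch assuming a hypothetical device inventory; the credential list and device records are illustrative, not any vendor's real defaults, but the check is the kind of audit that catches devices still on factory credentials, the weakness Mirai-style botnets exploit:

```python
# A few well-known factory defaults of the sort IoT botnets scan for.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def audit_devices(devices):
    """Return the devices whose (username, password) pair is a known default."""
    return [d for d in devices if (d["user"], d["password"]) in DEFAULT_CREDENTIALS]

if __name__ == "__main__":
    inventory = [  # hypothetical home inventory, for illustration only
        {"name": "camera-01", "user": "admin", "password": "admin"},
        {"name": "thermostat", "user": "home", "password": "s3cure-passphrase"},
    ]
    for device in audit_devices(inventory):
        print(f"{device['name']}: still on a factory-default password, change it")
```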

And who owns the data?  There are copyright, licensing, and privacy issues.  It's great, as Alexa said, to be able to put together the data in a way that is useful for you.  And isolated it might be harmless, but the permission structure of who controls, who can access, and who owns that data is important to benefiting the users.

It is not clear that the market structures are serving that purpose either.

>> SHANE TEWS:  You mentioned when we were prepping for this yesterday the potential for using geolocation as a way to put a barrier on IoT devices.  Do you want to elaborate?  That's really interesting.

>> JAMES EDWARDS: The devices themselves are often made cheaply, the minimum thing that does the job.  And so we shouldn't trust devices made in that way to have the same kind of security infrastructure you have around a top-end smartphone from a reputable manufacturer.

So how do you manage that?  Well, if you've got a smart thermostat -- this is not an original idea.  It's something written in a blog post.

The thermostat needs to be able to talk within your home network.  It doesn't have to talk to someone in Istanbul if that's not where you live.  You can limit the permission structure by geolocation.  You can do it in terms of network addressing.  So your router might have a two-track mode: there are devices you trust, because they have the security infrastructure, to go on to the open Internet, to go on the open seas.  Some boats you run about the harbor and don't take out, and some do have the capacity to go on the open seas.

The same engineering concerns apply to devices, especially those made as cheaply as possible.
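
As an illustration of that two-track idea, a toy sketch follows; the subnet, device names, and the allow_connection helper are hypothetical, and a real router would enforce this in its firewall rather than in application code:

```python
import ipaddress

# Toy "two-track router" policy: untrusted devices may only talk inside the
# home network (the harbor); trusted ones may go to the open Internet (the
# open seas). All names and addresses here are invented for illustration.

HOME_SUBNET = ipaddress.ip_network("192.168.1.0/24")
TRUSTED_DEVICES = {"laptop", "phone"}          # vetted, updatable devices

def allow_connection(device: str, destination_ip: str) -> bool:
    """Permit a flow if the device is trusted, or it stays within the home subnet."""
    if device in TRUSTED_DEVICES:
        return True                            # open seas
    return ipaddress.ip_address(destination_ip) in HOME_SUBNET  # harbor only

print(allow_connection("thermostat", "192.168.1.10"))   # True: local control
print(allow_connection("thermostat", "203.0.113.7"))    # False: blocked abroad
```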

Shane, signal me if you want me to --

>> SHANE TEWS:  Has anyone in the room had problems with an IoT device they purchased and used in their own home, office, their own life?

So you haven't had the default password issue?  Scott, have you?

>> SCOTT McCORMICK:  No, but most users would not even know they have an issue.  You trust that device.  You know, for instance, I run Nest in my own house.  I have Nest thermostats and Nest fire detectors.  So I myself monitor that traffic.  I've got infrastructure in my house to do that.  The normal user does not.  So you wouldn't know that.

Yeah, you might get locked out of your device.  You call and have it reset.  Other than that, nobody is going to know their thermostat has been used in a denial of service attack.

>> SHANE TEWS:  It was acting badly -- my Nest disconnected when they upgraded the firmware.  I made a conscious decision to let it stay disconnected.  You know, it would be cool to be away from the thermostat and turn my air conditioning on or off remotely.  But that's okay, I can be cool when I get to my home.

Anybody else?  Great.

>> Hi, from the Web Foundation.  I wonder if we're putting the cart before the horse.  All these mobile devices, as it were, how secure are they?  By the time the next billion are graduating to the IoT, we should have a frame of reference around: did we secure the mobile devices, or did we lead them to an insecure web?  How can we contextualize that with the devices in the next phase?

>> SHANE TEWS:  The trust starts right here.  You assume trust with this, yeah.

Any thoughts on that?  To make sure -- Alexa, you can probably elaborate on this.  How much does the phone know about you and how can we use that for good besides potential challenge?

>> ALEXA RAAD:  One of the scary things: obviously we are much more reliant on our mobile devices than ever.  However, it is amazing how much information is actually distributed from this mobile phone, this mobile device, about us, about our patterns and behaviors, even what apps we use.

If you think about the context of big data and the kind of pattern matching that can be done, it becomes really scary.  A lot of the time, between what you actually put in and what you see on the other end, consumers have no idea what is actually being broadcast about them.

>> SHANE TEWS:  Can you give us an example?

>> ALEXA RAAD:  Let's say -- this was about two or three years ago, not really publicized -- one of the mobile banking apps, when you actually put in your password, had one of those gotchas, and for a while that gotcha went unnoticed: the app was effectively putting your password out there.  They caught it in time and it didn't make it to the news outlets.  However, for a number of people, your user ID, your mobile password, particularly if you were on open Wi-Fi like here, it would be easy for somebody to eavesdrop, take that if they were perfectly situated, and be able to commit identity fraud.  Once someone has, for example, your banking ID and can get into your banking account, there's lots of information that they can take from you.

By the time you know about it, it is a couple of hours at best.  So these are the kinds of things -- in fact, Verisign did a study two years ago, written up on CircleID, where they talked about how much information is actually sent out from mobile devices that can be seen, that consumers have no idea about, in fact don't have the education to know about, aren't aware of.

>> SHANE TEWS:  When the device wants access to my camera, all my contacts, at what point do you think I'm beyond the -- I understand a lot of that is to make the actual app do what it wants to do.  So I need it to have some of that information.  Is there a way to cabin that off?  Is it all or nothing?

>> ALEXA RAAD:  Part of the problem is, a lot of the privacy notices or privacy policies, we don't necessarily -- I don't necessarily go and review them.  So yes, it is very, very easy: the device has access to my camera, the device has access to my mic, the device has access to other apps in there, but I don't necessarily know what the privacy policies are.  It does make it convenient for me, but I don't know what they are doing with it.  If you remember, it was a surprise for many people when they realised, we were talking about this the other day, that Facebook gets your photos and that they own them.  This came as a surprise -- Facebook is a much beloved app.  People use it all the time and they think they're familiar with it.  They had no idea that their pictures were not their own property.

>> SCOTT McCORMICK:  To caveat that, Uber updated their app.  One of the things, as I went through the terms of service, up pops a warning saying Uber wants to know your location for five minutes before and after a ride.

How does it know that I want a ride?

So because I pull up the app, when does it start monitoring me?  There are questions and legalities around the privacy of that.  Now when you go on iOS, I haven't looked at Android, it's an always-on function or an off function now.

Now you run into security getting trumped by convenience.  Everybody wants to use the app and wants to have that added benefit of the machine learning and AI that Uber is using to make their app better.

So do you turn it off or on?  Nobody understands the privacy rights behind that.

>> SHANE TEWS:  I'm going to use the hand-held one.  I think the idea is, anyone who wants to, stick your hand up and a mic will get to you.

Do you want to comment, Jim?

>> JIM PRENDERGAST:  I'm the remote moderator.  We have folks in the online room, in case they are not staring at the chat function, if you want to participate online you can enter your question into the chat or if your microphone is enabled on your connection, we can allow you to ask it via the audio system in here.

I'll ask my own question since I do have the mic.

Bringing it back, I know, Scott, your Uber example is beneficial, I think, to a lot of people at this conference who might be Ubering.  But the next billions are not starting at that point.  They are starting at a point much further back.  Their concerns are not about IoT attacks.  The point you were trying to make is that for most people in the developing world, this is their computer.  This is their connection to the Internet.  This is their bank.  This is their lifeline.  So from a social standpoint, not necessarily a hardware standpoint, what are some of the things that people need to be looking for, aware of, and potentially educated about, so that as they go about the normal course of their day, interacting primarily on their phones, they don't become susceptible or fall victim to some of the things that Scott described?

>> SHANE TEWS:  Anybody want to weigh in on that?

>> ALEXA RAAD:  If you have an iPhone, look at the location settings.  Go to settings and look at the location monitoring.  See how many apps actually give you only two options, which is always on or never.  Really, what they should be doing is giving you an option to monitor your location only when you are using the app, right?

>> SHANE TEWS:  You're saying authorize for a specific purpose rather than blanket?

>> ALEXA RAAD:  Why monitor my location at all times, where I'm going, when I'm not using the app?  The app is simply installed.  A lot of times when you set something up -- this is for the user -- you need to pay attention.  You continually press yes, yes, yes.  Actually read the text to see what you are saying yes to.  It may be harder for you to go back and fix it later.

>> AUDIENCE:  (Speaker away from microphone.)

How do we change that?

>> AUDIENCE:  Not using the microphone.  How do you change that?  You know, you're nodding your head.  What stops you or prevents you from actually reading those disclaimers?  Is it because they are too long?  Written by lawyers?  You just want to get to what you downloaded and start using it?

>> I'm from Bhutan.  I was also thinking, when you were talking about the privacy policies and all those things, when you download an app or even when you register to a Wi-Fi, you have the privacy policies and all the terms and conditions that you have to agree to before you sign in.  Most of the time, including myself actually, you just click agree without even reading it.

I think I don't know what the problem actually is.  I haven't figured it out myself, actually.  Maybe it is lengthy, or just because you need the app, you just, you know, click the agree button and that's it.  You don't actually know what is there.  The terms and conditions you don't even understand.

So you are asking, how do we change that?

>> SHANE TEWS:  This is something that lawyers will hate me for, but is there a possibility of doing kind of a gradation where I approve some things but I need a second tier level -- you see where I'm going on that?  Can I turn things on and off?  Sometimes these apps are using background data to help me with the service and sometimes they just feel invasive.

>> I am trained as a lawyer.  A lot of the issues that have come up at the IGF are about recognizing the values that are at stake.  As Alexa said, you want to make the information available because it's useful.  Not all the information all the time.

The default is to collect too much.  It is not proportional to the value of use you get.

So how do you make these systems worthy of trust?  The UX for that is terrible.  The UX is set up assuming that by default you want to share everything, and for each thing that you want to limit, you have to go and deliberately make a choice to opt out of being tracked and having your information collected and stored where it can be used for a range of purposes, some of them good for you and some of them not.

In so many debates, it comes down to: which is the default, and where do you start?  You can imagine it being the other way around.  What would the UX look like where you empower the user to say: I want to achieve such-and-such, and that's why I have the app or device.  And it gets the set of permissions that go with what I want, rather than what the provider of the app or platform wants me to give.
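
To illustrate that deny-by-default alternative, here is a small hypothetical sketch; the purpose names and permission labels are invented for illustration and do not correspond to any real platform's permission API:

```python
# Toy permission model: everything starts as "deny", and the user is granted
# only the permissions tied to what they say they want to achieve.

PURPOSES = {
    "hail_a_ride": {"location_while_in_use"},
    "share_photos": {"camera", "photo_library"},
}

def permissions_for(purpose: str) -> set:
    """Grant only the permissions that serve the user's stated purpose."""
    return PURPOSES.get(purpose, set())   # default: nothing is shared

print(permissions_for("hail_a_ride"))     # {'location_while_in_use'}
print(permissions_for("browse_feed"))     # set(): deny by default
```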

>> SHANE TEWS:  I want to bring you into this conversation.  Part of what IGF is about is identifying challenges and issues at the macro and the micro level.  We see we have a potential user challenge here, and you have tried to work this out from a global policy perspective.  Give me some of your thoughts on what you looked at when you were considering how we build some norms around this.  Because we have things -- when my device goes from the United States to Mexico or somewhere else, do my rights change with that?  Have I signed them away depending on what country it is?  I realise I changed topics on you.

>> I'm going to speak for the user who doesn't read the terms.  The reasons I don't: I want the benefit of the service, and I know that it doesn't really matter what I think.  If there were some way we could give feedback, clearly, like "I want it to be like that," then I would read it and I would annotate and do something.

But I think it's this feeling of powerlessness: I really need or want the service, and there is nothing I can do about it.  So I might as well hit accept.  I think that if there is a way to integrate feedback, in whatever country we are in, that would be really helpful.

The other challenge as we work internationally is language.  Sometimes the terms of service are not translated into local languages.  What is the point in reading 20 hard-to-understand English pages that are written for a native English speaker when you speak a different language?  That's hard.  So we need ways to figure out how to translate very nuanced information in terms of service and references.  And we need kind of agreed-upon terms, because as someone who came from policy and not from tech, I didn't always understand what people were actually asking about in the terms of reference or anything around tech.  So I think having common language would be really helpful for normal people who aren't doing tech stuff every day.

>> SHANE TEWS:  I'm thinking wiggle emojis.  Happy faces.

Scott, you mentioned AI as something becoming more prevalent in this space.  Taking this to the positive side of the ledger, part of what these apps are looking at is reputation-based and predictive.  They are trying to help us by -- Alexa, you mentioned this in the case of banking.  Am I somebody who travels quite a bit?  Is it normal for me to be in this place, using an app that handles money?

How can we help the user use these apps and be more protected?

>> SCOTT McCORMICK:  We tend to throw the term AI around all the time.  Watson is close to true AI, but true AI is not very common.  Whether it's Amazon or Uber or any of the large companies, Microsoft, LinkedIn, you name the service, they are using machine learning.  They are taking this data in.  Years ago I helped develop a technology that said: hey, if one of my system admins isn't supposed to be in the office at 3:00 o'clock in the morning and he's trying to log in, and we see that, put him into a honeypot and see what he's doing.  Make it look like he's on an actual system.

So that is machine learning, though.  That is sitting there and setting the business rules to then learn and know when somebody is supposed to be in the office or supposed to be in a location.  So from an end user standpoint, take banking, right?  I've asked plenty of banks around the world about this.  I have an app on my phone.  It knows where I'm at.  If I'm logging in from India, or Nigeria or any other country, and I'm in the U.S., why am I being allowed to log in from that location?  Granted, there are geolocation issues with IPv6 networks; if you're on IPv6, geolocation doesn't work as it should, as it does in the IPv4 world.  There are technical issues behind it that are limiting factors.
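
A minimal sketch of that business-rule approach follows; the EXPECTED profile table, the honeypot decision, and the returned actions are hypothetical placeholders, not a real product's API:

```python
from datetime import datetime

# Compare a login attempt against expected working hours and location, and
# route anomalies to a honeypot for observation rather than a real system.

EXPECTED = {
    "sysadmin": {"hours": range(8, 18), "country": "US"},
}

def classify_login(user: str, when: datetime, country: str) -> str:
    """Return 'allow', 'honeypot', or 'deny' for a login attempt."""
    profile = EXPECTED.get(user)
    if profile is None:
        return "deny"                          # unknown account
    if when.hour not in profile["hours"] or country != profile["country"]:
        return "honeypot"                      # anomalous: observe, don't trust
    return "allow"

print(classify_login("sysadmin", datetime(2016, 12, 7, 3, 0), "US"))   # honeypot
print(classify_login("sysadmin", datetime(2016, 12, 7, 10, 0), "US"))  # allow
```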

>> SHANE TEWS:  But you have a room full of people that I assume don't live in Guadalajara, even though we all enjoy being in Guadalajara.  How do my apps know that it's okay that I'm here?

>> SCOTT McCORMICK:  Because your device is here.

>> SHANE TEWS:  Physical presence makes a difference.

>> SCOTT McCORMICK:  Yes, you have an iPad and a phone in front of you.  You can make a business rule and say, yes, this is the person's true location.  If my Apple Watch is in Mexico City and my device is in Hyderabad --

>> SHANE TEWS:  You have a problem.

>> SCOTT McCORMICK:  Researchers have found that 40 percent of IT professionals surveyed are most concerned about internal threats.  A lot of damage is done by the people you know, right?  Insider threat is huge.  I'll use the ultimate example: Snowden.  He was the ultimate insider threat, someone who came from the intelligence community.  There were supposed to be stopgaps and checks and balances.  Obviously those failed.  So it proves, and I'll quote General Alexander who made this comment: if someone is determined to get data out of your organisation, there's no way you are going to stop that individual.  They will succeed.  Granted, there are plenty of safeguards that you can put in place, layers of safeguards that you hope will catch that individual.  But if someone is determined to get information out, they will get information out.

>> SHANE TEWS:  James is getting up again.

>> JAMES EDWARDS: Always.  I said I would rather not talk with my back to everyone in the room.  Scott said there are safeguards in an organisation to stop people getting the data and sharing it with those who shouldn't have it.

As a global community, there are security services catching a lot of data that you don't necessarily want caught, and there are supposed to be safeguards and institutional constraints on how those security services work.

So what is the relevant community?  What should the trust and sharing architecture be for determining who gets what?  Governments just do stuff.  Private companies just do stuff.  Individuals just do stuff.  How do you build the trust that is going to assist in both those who are already online being able to do what they want, but also providing that trust architecture for the next billion, which is in your title?

>> ALEXA RAAD:  There was an article yesterday or the day before, coming out of the Snowden files, that the U.S. and the British spy agencies were able, when people got on commercial flights and used their phones -- this was a couple of years ago -- to listen in on the GSM conversations, the cell phone conversations.

So to that extent -- there is no way that you or I or anybody else in this room would have known that had it not been for this revelation.  It does bring up the question: what are the safeguards?  What are the parameters?

There are some heuristic rules that we as end users and even developers can apply, like what you mentioned in terms of fraud prevention, right?  If you are here and your IP location says you're in Guadalajara, and somebody is trying to use your credit card from a device with an IP that is in Beijing, there is no way that you could have gotten from Guadalajara to Beijing in less than two minutes.  So that is a fraud detection algorithm that has been used by credit card companies for a very long time.

Another thing, like you said: there is no real AI.  There are predictive algorithms, but they are dependent on past behavior, looking at how that could forecast future behavior and when there are exceptions.  The system throws an exception and says, well, you know, this amount is out of your normal purchasing behavior.  Combined with the fact that you are trying to charge something from Beijing when you clearly were in Guadalajara two minutes ago, that ought to raise the flag even higher.  So the probability that this is fraud is higher than before.

But that is all we have so far.
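
As a worked example of that impossible-travel rule, the sketch below computes the speed implied by two transactions and flags anything no airliner could achieve; the coordinates are approximate, and the 900 km/h threshold is an assumption, not an industry standard:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(p1, p2, minutes_apart, max_kmh=900):
    """Flag a pair of transactions whose implied speed exceeds max_kmh."""
    distance = haversine_km(*p1, *p2)
    hours = minutes_apart / 60
    return distance / hours > max_kmh if hours > 0 else distance > 0

guadalajara, beijing = (20.67, -103.35), (39.90, 116.40)
print(impossible_travel(guadalajara, beijing, minutes_apart=2))  # True: flag it
```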

>> SHANE TEWS:  So, there is a study I was reading yesterday on 2017 overall cybersecurity assurance, where we are trying to head.  One of the top items goes to the point made earlier, what they call the mobile morass: the challenge of enterprise security, individuals, and risk assessment on mobile.  So ... yes?

>> I had one more comment.  I work for the Wikimedia Foundation, behind Wikipedia and other knowledge projects.

I want to step back a little bit in the conversation we are having and go back to the title of educating the next billion users.  I think we haven't talked about, without sounding patronizing, the conditions and the digital literacy and digital skills that the next billion people will have.  I think so far the conversation has been around terminology issues and things that revolve around the people that are already connected, right?  The people already online and educated, with digital literacy levels that would allow them to engage with these kinds of things.

That said, I think we, as enablers, participants, and influencers in the Internet, have a very important role in how we bring this very relevant topic to the next billion, in the context of the digital skills and the digital literacy that they have, and, with that understanding, think about how it would shape how we inform these new people who will be coming online.

>> SHANE TEWS:  I think that's a very important point to bring up.  I think this dialogue is trying to point out that we've all lived through the first generation: the first billion have had to deal with the hiccups of not having security built into the system.  So some of the things we are discussing allow a new level of security to be baked in as we go to the next generation, but they do have privacy challenges.  I guess one of the questions, to your point, is how do we best educate the next billion users?  We have a group of people here from all around the globe.  Are there any thoughts about the best way to reach the people that are just coming online?

Great, over here.

>> AUDIENCE:  Sorry, can you hear me?  Okay.  To Jorge's point, I think we need some principles about how we proceed with empowering the next billion.  One, we cannot afford to use the same patronizing approach that we have used with Sub-Saharan Africa, Southeast Asia, poor people.  We have to figure out where the intersectionalities lie with other injustices.  How do we engage with educators?  How do you engage with those figuring out how technologies are being used for agriculture, for security, that kind of thing?

And last but not least, the most important thing is, how do you also engage them in the core design of this empowerment, if you will?  Otherwise I think we are going to miss the mark and end up with the next billion connected to a very different version of what we are talking about here.  How do you make sure the initiatives trying to connect them also make sure they are coming on to the same kind of Internet you and I are enjoying today?  For those elements, we need to design a set of principles that everyone will agree upon: we won't use the same patronizing approach, but will draw on their knowledge and insights so that they are co-creators of this connection.

For me that is something that has been missing in many discussions about this.  Of course, it's great that we all want to do good.  It is very, very important.  We have to make very sure we do not make the same mistakes of being patronizing and condescending in how we go about it.  Design principles that everybody adheres to in the work they are doing, wherever they are doing it and whatever angle they are tackling it from: that would be an interesting next step.

>> SHANE TEWS:  One of you brought up the point that some of this is not specifically technology.  One of the events I attended earlier this week discussed energy: they were assuming that people are on the grid 24/7, and they aren't.  How do we get around that?  This particular example showed that they would bring in cars and actually hold education rounds, where people would know to come find the mobile school.

What they were doing was giving them the devices and the energy at the same time.  It was just a matter of knowing when and where to be so you could be part of that dialogue.  I get your point about the principles.  I hope -- I don't think we are trying to be condescending.  I think we are trying to learn how best to do this, not with the assumption that I, as a first world person, know what that person's problem is and what they need.

I think we also have the fact that a lot of this stuff is just intuitive to us, how we think it works, but we don't always understand the underlying comprehension behind that.

I'm always curious: where do you step into that?  I have watched programmes and find it fascinating; you see seniors who want to be online.  They don't want email until they realise it's the only way to talk to grandchildren.  You get dialogues between ten-year-olds and 80-year-olds who figure it all out.

What is the best way to put principles and guidance together that is useful?  Back here?

>> AUDIENCE:  I have a separate point to make.  If you want to continue with this point --

>> SHANE TEWS:  No, you have the microphone.

>> I'm currently based in Shanghai, but I'm Indian by nationality.  I want to bring in the perspective that sometimes it is also about readiness.  While we are trying to connect the next billion, India just demonetized its two highest-value currency notes.  And that pushed the economy to digitalize overnight.  We got basically a three-day notice to take the money out of circulation.  The country did not print enough currency in lower denominations for people to keep using cash.

So everyone had to use digital means.  There is an app called Paytm, the Indian version of PayPal.  People were forced.  I'm talking about ordinary people, not just the educated ones.  India is way, way behind a lot of the developed nations in terms of connectivity, in terms of getting access, and in digital literacy per se.  Now you have vegetable vendors on the street who do not know how to use a mobile device having to buy a mobile device that also supports the Internet, buy the facility to have Internet, and get paid through that app.

We are not an economy that is ready to get digitalised as of today, but we are being pushed and coerced into it, which is one way of getting people to learn, because it forces them to learn.  It is not always the right way to do it.

>> SHANE TEWS:  Taking people's currency out of circulation makes them digital pretty quickly.

>> AUDIENCE:  It did!  It did!  It is something I feel is unfair.

>> SHANE TEWS:  Since you mentioned it, did a lot of these vendors already have bank accounts that they could attach these to?

>> AUDIENCE:  The thing is, you have the debit cards, but some of them don't have bank accounts.  India isn't as rich a country as people think.  There are so many different languages, and people are not necessarily multilingual per se.  There is a withdrawal limit of $29 a day at the ATM.  That's for a family.  In India we do not have a one child or two child policy per se.  It is advisable by the government.  It is called (inaudible): us two and our two is the policy for having kids.  But there is a lack of education.  A lot of people do not even have sex education.  So they have a lot more children than they can actually manage.

If you have to sustain an entire family on a $30 a day cash withdrawal limit, you can't do that.  But there is no limit on money transacted digitally.  If you have a business of any sort, you have to rely on digital means.  Now, I may be taking raw material for my business from somebody who does not have any sort of digital literacy at all.  That puts me in a fix.  I have no way to ask that person to pay me, or for me to pay that person, because now one of us has a barrier.  And it is just basic coercion.  You do not want to use the digital economy, you are not really ready to use plastic money, but you are forced to.  We are not even well connected.  Our Prime Minister as of today is very pro digital.  It is a ten-year plan, but the cash was taken out overnight.  What happens to people in remote parts where they are still trying to get Internet access?  If you do not have money in hand, you don't have the Internet.  What happens to an economy at that point?

>> SHANE TEWS:  Since you are living this life -- some of us were in Hyderabad when that happened.  You think about the digital element, but think about people who don't have online ability, who didn't have a mobile device.  How quickly is that ramping up?  How is the society managing that?  Is it angst-ridden?  Are they feeling it come together?

>> AUDIENCE:  Instead of giving flak to the government for it, after the initial drawback of maybe two, three days of rage and angst and protests, a lot of people started appreciating the Prime Minister for bringing in something new as a digital policy and taking black money out of circulation.  It was basically done to take out black money.  But you cannot suddenly expect that 1.25 billion people can manage to shuffle around their money in less than the 72 hours that were given.  And for people like me -- I'm an overseas Indian citizen; I do not live in India, I live in Shanghai -- if I carried any sort of currency with me as an emergency measure, so that I have some amount of cash when I come back to the country, maybe not a lot but some amount: for people staying overseas in a country like China, China did not allow Indians to exchange that currency.  So overnight, if I had, say, about $2,000, I just lost that money.  There is no way for you to exchange it.

So people are not very happy with it, but at the same time there were so many deaths.  If you actually read the vernacular papers, not the national and international media, there were about 15 bankers who died from the stress of handling so many people coming into the banks.  There are four-hour queues to withdraw $50.  We just weren't equipped to handle that kind of pressure.  Maybe if they had given more time to it, six to seven months, they could have gotten the policy at least working.  But if I have domestic help at home, they do not even understand the concept.  So the first day when it happened, people cried, because they had money in cash, whatever they earned, just minimal means of survival, and suddenly they felt that they were going to lose everything they had.  They weren't even educated.  The gap was in education.  It was all digital, in a country that is not digitalised.

>> SHANE TEWS:  I'm infinitely interested in this situation with your demonetization.

But going back to our topic, I'm interested in lessons learned.  You basically were shocked into a digital system that didn't seem prepared for it at multiple levels.

Is there anything, any takeaway we can see?  If they had had a thought-out plan, how would you manage that going forward?

Obviously, you have infrastructure issues, but you are not going to solve those in six months.  The idea is, how would you have prepared people for this?  Like the vendors you are talking about: would it have helped if they had ramp-up time?

>> AUDIENCE:  Not just that.  The least you can do as a civil society is to educate people.  We have a lot of vernacular papers; we have 35,000 newspapers in circulation on a daily basis in India.  We have 17,000 languages in India alone, twenty-six of them official languages, the rest dialects.  The least they could do is put it in the newspapers in steps, you know?  We are one of those Asian countries where, despite some hostility, we thrive on journalism.  You could educate those who can read so that they try helping the vendors understand the situation; give it time; educate your house help instead of first thinking of your own self, because anyone from an upper middle class family has a lot of money.  We save in gold and save in cash at home.  People do not necessarily declare their actual income.  There's a lot of black money because we pay 37 percent as income tax.

>> SHANE TEWS:  I could talk to you about this issue for a long time.  Let's bring it back to our idea of getting people connected.

Obviously we are seeing what India is going through.  We can take lessons from that as we bring more people into the digital medium.  I will be curious to see, as we go through this, how it works out, and whether it takes the challenges of the black market away or people work around it.  I'm curious to see if you have a point of view on this.

>> No.  I think what is happening in India is basically a good case of what not to do.  Think of places where, for eGovernment services, critical services, the investment is now increasingly only in the online services, not factoring in people who are not connected at all.  I think there is a mapping that needs to happen about the context in which you're trying to introduce these digital devices and connections to the so-called next billion.  Let's contextualise the reality: how does it fare in terms of gender?  Where are the women?  Do they have the time?  Is this something that they are factoring into their budgets?  I have to leave, actually, but the most important thing: this cannot be a siloed conversation, though it is a great start.  I emphasize the point about figuring out principles of how to do no harm.

>> SHANE TEWS:  To your point about the papers in India, that won't work in every group.  Where can you get a critical mass of people to educate?

>> I have a point to add here.  Even if we print it in the newspaper: today you digitalised the economy.  You need a phone that has Internet access.  That means it has to be a smartphone.  If I don't know how to operate a phone in English, how do I first understand how to use a device that I have never seen before, never held before?  India is that kind of a backward country at the moment.  There are remote parts; there are developed cities, but there are other cities that do not even know how to use the device.  If you don't know how to use the device, let alone an app, and for that app you need a bank account that leads you to a debit card, there is a lot of education that was expected overnight.  For 1.25 billion of us.

>> SHANE TEWS:  Mark, did you want to get in on this discussion?

>> MARK:  Yes, Mark from the U.K. government.  I think what you need to bring into this debate are those entities and services at the national level that actually can deliver the education.  I mean, the U.K. experience on data protection advice is through our network of Citizens Advice Bureaux, and also our Information Commissioner, our independent regulator on data protection, is well seized of the issues you have been discussing in this session and has produced reports on how to empower consumers to protect themselves and exert greater control over what they are signing up to when they buy a device.

So maybe, I don't think these people are here at this IGF, but maybe in the future you can get some of that dialogue involving those people, and also at the national level, the national IGFs.  Maybe that is the place as well to bring in the national consumer advice activists and agencies, the independent personal data regulators and so on, to get them involved in a strategic plan for education.

>> SHANE TEWS:  Thanks.  To the point of, you know, what works where: you are going to have different levels that work in different areas, but you might be surprised what works cross-functionally.

I remember reading an article about somebody who did a test and took something that looked like an ATM machine and dropped it into a neighborhood in India.  They let the kids just play with it to see how quickly they could acclimate.  It was more or less a game.  They came back a week later.  The kids said, first of all, we speak Tamil and they put it up in English, so we had to learn English first.  They were amazed how quickly, because they wanted to play with the device, the kids went and educated themselves.

So you have to look at that: what they thought were going to be barriers turned out to be different barriers.  What the kids needed was different: the machine would turn off, and they wanted it on 24/7 because they wanted to figure out what they could do with it.

To Mark's point about the studies, it is always interesting to see how much of what you're getting back is actually usable.

The idea of eGovernment and people actually using the services, are we spending money on something that is not at the right level of entry?

>> AUDIENCE:  In India we are fast learners, for starters.  Most areas do speak English.  Hindi is common, and the ATM machine will give you the option of English, but people do start learning.  They did not expect people to understand ATMs, but overnight they learned it.

Out of that jolt of shock, people did learn.  But India being the largest democracy, how democratic was that kind of coercion, suddenly forcing an entire nation that wasn't prepared to use something that they did not have access to?

>> SHANE TEWS:  You were in Singapore.  We were actually there.  It was unique to figure it out.

We have five minutes left.  I was going to do a lightning round with this group.  Anyone who wants to add to the conversation?  James, start with you?

>> JAMES EDWARDS: The thing we heard about in the latter half of the session is that there are lots of top-down approaches to educating, and the idea that I can make things accessible and set up devices to do what the users want.  What is the bottom-up approach?  Sticking an ATM-like machine in a place that hasn't had one before: that was what we heard over here and what you're getting at.  What is the bottom-up approach that meets people where they are, discovers what the real barriers are, discovers what the priorities are?

>> SHANE TEWS:  Good points.  Scott?

>> SCOTT McCORMICK:  From the Internet of Things perspective, we should also be looking at the actual people developing the technology.  As we saw in the Dyn attack that happened recently, fast prototyping to get a product out to market, to then get your revenue base built, is not acceptable.  We need the security safeguards built in.  Looking at what happened with Dyn, there was a very simple safeguard that should have been employed, or deployed, in that technology, flat and simple.  Again, it goes back onto the vendor.

>> ALEXA RAAD:  Thank you.  I think if we look at the polar ends, one is government regulation, government involvement in one shape or form.  Bottom-up is self-organizing end users and so forth.

I don't have the answer, but I don't think that you are going to find the solution at either end of the extreme.  One of the nice things that I have seen happen, for example: Underwriters Laboratories is starting to provide certifications for some IoT devices.  That's good.  To what extent could that be expanded, perhaps across different countries?  And for users, I think it is a tough requirement for us to expect that users are going to be, effectively, security experts.

Users want whatever is cheaper, faster, better.  If security can be baked into the design somehow, either through business models or through some sort of minimal regulation, to make things for users that are cheaper, faster, better, it removes the constraint on them to now also be a security expert on top of everything else they have to be.

>> SHANE TEWS:  Go ahead.

>> MELANIE:  To the comment you made about energy, I think we have to remember there are 1.2 billion people that don't have it.  Through our work on-grid and off-grid to reach those people with electricity, we can pair that with Internet connectivity.  So we are not just delivering one solution; we are delivering it all at the same time.  It's much more efficient.  There are major cost savings to doing it together.

>> SHANE TEWS:  Anybody online, Jim?

This was designed to be an interactive workshop.  Thank you to those who interacted.  Even if you didn't verbally participate, one of the things we would like is a feedback loop.  I think we heard that was something that was maybe missing, not necessarily in the policy, but in the application and design of how this information is getting out.

So if you want to make comments about this particular workshop or any suggestions that you might have, there's an ability to do that online, or you can come talk to me afterwards.  Mark, I appreciate your point about getting more government involvement in this dialogue.  Getting everyone connected is important.  If we can do it in a more secure fashion for the next billion users, in a way that is actually user friendly, perhaps more bottom-up and less top-down, that's ideal.  Those are excellent suggestions, and I wrote down other things as well.

Thank you for starting your day with us and have an excellent day at the Internet Governance Forum today.

(Applause.)

(The session concluded at 10:00 o'clock CST.)