IGF 2022 Day 2 WS #224 Ethical and legal boundaries for OSINT practices

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> CAROLINA BOTERO:  I think we are starting.  Hello, everyone.  Thank you so much for being with us.  We have a great panel today on a very interesting topic.  Eduardo, should I go on?

   >> CARRILLO EDUARDO:  Yes, you can go on.  I'm here for the on-site moderation.  We have some people in the room, but I think it's best to start for the sake of the people who have joined online.  The floor is yours.

   >> CAROLINA BOTERO:  Perfect.  Thank you very much, Eduardo, for your help over there in Ethiopia, and thanks a lot to all of you joining from home and on site in Ethiopia.  We have a very interesting panel today with a very long title that I will not repeat; you all have the agenda.  I just want to point out that we're going to talk about OSINT, the open-source intelligence research tool that exists everywhere, and the dilemmas it poses.  It has become a buzzword today, like fake news or artificial intelligence: it represents many things, and one can hardly understand exactly what it is, but we need to talk about what it is and, especially, about the problems it can pose.

OSINT is problematic not because of what it does but because of who uses it and for what purpose; the purpose is not always innocent.  The basic activity behind OSINT is what is called open-source research, and this activity is legitimate.  It is a legitimate tool whose legality depends on the specific context: it is the context that allows us to measure legality.  One way to approach OSINT is to think about specific cases, and this is what I want to do today: to propose a series of cases to you, to pick your brains about when and why OSINT has become what it is.

First, I would like to point out some private cases.  For instance, when we seek data to inform decisions, think of hiring a person, that is perfectly okay.  But there are tricky uses: for instance, when you use OSINT to discriminate, to decide not to hire someone because of their race or their religious or political affiliation.

We also look for data for public accountability on human rights abuses, for instance.  That is something these tools have been doing very well, and OSINT has become a great tool for civil society during social uprisings, for example.  It is not very well received by many, and of course it generates some discussion, but it has become an important tool to expose human rights abuses.

When you use it for advocacy projects, to learn who your adversaries are, who the other actors are, this is very interesting and important and serves us all, but it has boundaries.  When does your activity become profiling?  It can go beyond what you are looking for and build a very accurate profile of people that can affect them.  Probably the most problematic cases are those involving state use of OSINT, because the state operates at a different scale and has a different relationship to us as persons.

Let me show you three cases.  For instance, take the information on social media interactions that a public relations company gathers and provides to a public authority.  The first question would be: how far can it go?  Can it include profiling of political opponents who are critical of this public authority?  Those are questions that arise everywhere in the world, and we will have a position, of course.  Such analysis can, for instance, help the public authority understand what questions people are asking on social media and how to respond to specific populations on the issues it handles, but of course it can also become profiling.

What about information research to prevent crime or threats to national security?  We all want organizations such as the FBI in the United States to look into severe threats, shootings, attacks, as they have done in specific cases.  That is something we believe should be done.

But there are other cases, such as when the Colombian police investigate fake news during a national protest, with fake news basically meaning news in which they are not shown in a flattering light, when they say their honor or good name is under attack.

Then the situation is different.  Finally, I want to propose a case where OSINT is used to investigate a crime that has already happened.  We see this as something good, and we all want the authorities to have these capacities, but often exactly the same tools are misused, for instance to investigate those who protest, labeling them terrorists.

Even in Colombia, we have heard of tools marketed for geopolitical intelligence monitoring.  What does that mean?  What capacities are our authorities gaining?

So, as I said, we are a rights organization in Colombia, and I am the Director.  When we speak about OSINT at Karisma, first of all it stands for open-source intelligence: a series of techniques to analyze data hosted in open sources, with the basic idea behind OSINT being open-source research.  But when you talk about OSINT, there is another element, intelligence, and intelligence refers to the purpose of obtaining an advantage in offensive or defensive terms.

There are other elements relevant to the way we approach OSINT.  It is often carried out through the use of specialized software; all the information is extracted from public sources, and it may include data protected by the right to privacy.  Here I want to say that we understand data protection has a role in OSINT, but the main tension we have identified is with the right to privacy.  There is also another element: when we do OSINT, we are monitoring the net or using scraping techniques, and that is what increases the risk to human rights.  With this introduction, allow me to continue with our panel.  I will let everybody introduce themselves so you can get a sense of their profiles.  Let's start with Dave Maass.  Dave, please go ahead with the first insights.

   >> DAVE MAASS:  Okay.  Do you want me to introduce myself and talk for a little bit?  Excellent.  My name is Dave Maass, and I am the Director of Investigations at the Electronic Frontier Foundation.  If you're not familiar with us, we're a San Francisco-based NGO that works on digital rights and the intersection of human rights, civil liberties, and technology.

I specifically work on a team that does deep-dive investigations into surveillance technology, which means that I both collect OSINT in the field and analyze how law enforcement in the United States uses OSINT, and often stretches the definition of OSINT.

Part of the way I work on this is through a partnership with the University of Nevada, Reno, where we teach journalism students how to use OSINT methods in their reporting.

Now, what I tend to focus on at EFF is uncovering where and how police are using surveillance technology.  We have one project called the Atlas of Surveillance, which you can visit at atlasofsurveillance.org, and I can type it into the chat real quick if people would like.  It is a database and mapping project to identify which law enforcement agencies are using which surveillance technologies, be it drones or facial recognition or license plate readers, across the United States.

And so what we do is train students to look for open-source intelligence in order to build this dataset.  They might be looking through news articles and press releases, going through government procurement documents, purchasing documents and receipts, or going through meeting minutes and agendas.  A lot of what we do is teach them advanced Google techniques, such as using the site: operator with a government website and adding search terms to bring up documents that may not be easily linked, in order to find information about surveillance technology.
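The site-restricted search technique described above can be sketched as a small helper that assembles the query string; the domain and search terms below are hypothetical placeholders, not examples from the talk:

```python
# Minimal sketch of the "site:" search technique described above.
# The domain and terms are illustrative placeholders.

def build_site_query(domain: str, terms: list[str]) -> str:
    """Assemble a Google-style query restricted to one website,
    quoting any multi-word phrase so it is matched exactly."""
    quoted = " ".join(f'"{t}"' if " " in t else t for t in terms)
    return f"site:{domain} {quoted}"

query = build_site_query("example-county.gov",
                         ["license plate reader", "procurement"])
print(query)  # site:example-county.gov "license plate reader" procurement
```

The resulting string is pasted into an ordinary search box; the value of the technique is simply that search engines index many agency documents that are never linked from the agency's own site navigation.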

We also do it in a more direct fashion, where we're not just mapping out which police departments have technologies.  Right now, for example, we're doing research on the U.S.-Mexico border, trying to identify the exact locations of surveillance towers being used for border security.  By going through environmental assessments and procurement documents, and even just moving around in Google Street View and looking for these towers, we've built a dataset of more than 200 of these surveillance tower locations.

Now, one of the criticisms we often come under from the law enforcement or pro-intelligence community is that what we're doing shows criminals what surveillance methods exist, that we're giving criminals a map to avoid surveillance.  That's not our goal, that's not what we're doing, and that's not the outcome in the end, but it is some of the criticism we have to face in doing this research.

Other things we do as well: if we're tracking a specific surveillance company, we dig through business records, licensing records, and ownership records to trace the various shell companies until we find out who is responsible.  These are traditional OSINT techniques, just going to these databases.

One of the complicated, emerging areas for people working in the OSINT sector that I would like to point out is that we really don't know what to do with two kinds of datasets.  One is datasets resulting from hacked data, where some sort of hacker has gone in and breached a system or siphoned information from it and put it online.  That's becoming more and more frequent, and the question is whether it's good or legitimate OSINT practice to go in, download a dataset like that, and go through it.  It's going to depend on your purpose, the personal ethics of your organization, and the legal position of your organization, but that's one of the big things that people in this space are dealing with.

The other is leaked information.  This has been an issue people have been dealing with since the early 2010s, with WikiLeaks and things like that: what do you do if an enormous amount of data is suddenly posted online?

I want to switch to talk a little bit more about the law enforcement side of things.  When we first started looking at this, law enforcement had a traditional view of what OSINT is: looking at what is publicly available online, what is on people's public Facebook pages, Twitter pages, news websites.  There was once a fusion center in San Diego that came under criticism for spending thousands and thousands of dollars on televisions, and Congress asked them what they were using these televisions for.  Their answer was open-source intelligence: their definition of open-source intelligence was watching the news, and that is why they needed to spend the money.

But we've noticed their definition gradually changing, and there is a question, especially as social media becomes more private and secure, of where the boundaries of OSINT are and where the boundaries are for something that should require a deeper legal authority.  An example that comes up frequently is police use of fake profiles, essentially catfish or sock-puppet profiles, depending on your local idiom: a police officer pretends to be somebody else in order to worm their way into private Facebook groups, or to start conversations with people, or to friend people so they can see their photos.  Is that OSINT anymore?  That isn't material that's open on the web, but they're going to claim it is, because once they're inside, it's open to them.

You also have things like the company Clearview AI, a face recognition company that has scraped the open web for images of people: journalists, activists, basically anybody with an image posted on the Internet, from social media, a news article, or a YouTube video.  It extracted the biometric information, the face fingerprints, of everybody and created an enormous database for law enforcement to use face recognition to identify suspects, or people in general, using what they would call open-source images.

And that gets into some of the other things we're seeing out there.  Where previously OSINT might have been a police officer sitting at a computer doing research, you now have automated systems that are constantly scraping the Internet, downloading things, and using algorithms to identify social networks between people, to identify people who might or might not be involved in crimes, and to predict where there is going to be social unrest and where people are gathering.  These algorithms are now combined to create very powerful tools for law enforcement.

And then the final thing I want to mention is that law enforcement is stretching the definition of OSINT to include commercial databases that they purchase access to, whether that's information that companies like LexisNexis hold on people because they collect it for various products, or data brokers gathering information about people's phones in order to advertise to them.  Law enforcement might buy access to that data in order to track people, to find out where they were at a given time, or to find out information about their phones in general.  I would say that is not OSINT, because it is not publicly available, but they want to stretch the definition to say that it is.

And I will go ahead and wrap up there, and I'm happy to come back in later as we move through the conversation.

   >> CAROLINA BOTERO:  Thank you, Dave.  As you see, we have a great panel.  Dave took his first 10 minutes, and thank you for accepting, for doing both things, and for presenting this great landscape of how OSINT can become problematic and has been used as a tool for what in Latin America has even been called cyber patrolling.  With this introduction, allow me to bring Eduardo into the panel.  Eduardo, I would ask you to introduce yourself and use your 10 minutes to give us an introduction to your approach to OSINT.

>> EDUARDO BERTONI:  Thank you very much.  I hope everybody hears me well.  I'm Eduardo Bertoni, currently the representative of the South America Regional Office of the Inter-American Institute of Human Rights.  I used to be the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights, and the Data Protection Authority in Argentina until the end of 2020, exactly.  Thanks for this invitation.  What I'm going to present is part of a long working document that I prepared for the Center for Studies on Freedom of Expression (CELE) in Argentina.  The institute I represent is an academic institution headquartered in Costa Rica, and my office is in Uruguay.

In the working document I have been working on, the first thing I would like to highlight is that the literature review on open-source intelligence, OSINT, showed that it is difficult to find a date or an author who first proposed the concept.  It is a concept that has been created over time.

The other important thing I found is that the idea behind the concept of OSINT is many years old, because it refers to the collection of information available in easily accessible sources, public sources in general, in order to carry out "intelligence", which I put in quotation marks.  There was such intelligence during the Second World War, for example: people reading newspapers from all around the world.  You can see movies of people doing intelligence by just cutting out articles they found in different media in different parts of the world and then doing some sort of analysis of that collection of data.  That is OSINT as well, according to the relevant literature.

Those who dedicate themselves to OSINT today can be governmental offices as well as private sector companies or academic research centers whose researchers make use of OSINT practices for the purposes of their work.

In colloquial language, people refer to the word intelligence as an activity, not as a noun, and that activity appears immediately linked to tasks carried out by state security forces in order to prevent both attacks from abroad and crime that can occur in a certain territory.  Maybe these are the OSINT activities that attract the most attention and concern in the international community.

However, today these intelligence activities are not limited to the above, because the search for information in open sources has incorporated other actors with purely scientific or academic research purposes.  Based on the possibilities offered by the Internet, information analysis activities with open sources have grown exponentially.  But, and this is important, not just any open-source data collection is OSINT.  In my view, OSINT practices must be understood as an output that is reached after a process of specific collection of raw data or existing information, the input, which is then properly analyzed in order to provide information with a specific objective.  This is important to characterize OSINT.  OSINT practice is also linked to the concept of open source and to the consent that the owner of the data may have granted so that the data are fairly freely accessible.

Despite how useful it may be to make this classification of information sources, it is necessary to bear in mind the ambiguity that exists in the scope of what can be considered public or private information, since within each of these levels both public and private data can be found.

OSINT practice, in my view, does not in general sit easily with the protection of personal data, from different perspectives.  First, it can be affirmed that there is no consent from the owner to process the data, because the required consent is not valid.  So: no consent, no possibility to process data.

Or it can be argued that even though consent is granted and valid, it can in no way extend to the processing of personal data by different actors for different purposes.

So, consent sometimes plays a difficult role here, because the consent might be valid for some specific purposes but not for all purposes.  We have a problem there.  Finally, beyond the two previous issues, it can be argued that a distinction should be made among the data that exist in public sources, because what matters is not the source but the type of data.

However, objecting to these practices on grounds of violation of privacy or violation of data protection can be problematic in light of the exercise of another fundamental right: access to public information.  Remember that OSINT practice means gathering information from open sources, so the question of open sources and public information is an important one here.  Resolving this contradiction between the right to privacy and the right to access information is what access-to-public-information frameworks and their obliged subjects are meant to address.  The problem is that nobody has resolved this contradiction for OSINT, and this leads to a problem with a complex solution.  This is something we need to work more on.

So, this leads to the conclusion that the impact of OSINT practices on fundamental rights has what we call in Spanish (?): it is not clear.  It is difficult, in my view, to affirm conclusively that these OSINT practices violate human rights in all cases, but it is even more conflictive to affirm that this does not happen.  The analysis of OSINT practices from the perspective of the right to privacy on the one hand, and the right of access to public information on the other, demonstrates, in my view, the complexity of the issue, which prevents us from holding clear positions regarding the impact these practices have, particularly on human rights.  Thank you.

   >> CAROLINA BOTERO:  I'm sorry, I was not able to unmute.  Thank you very much, Eduardo.  We have this contrast in how Eduardo has been thinking about the legal framework of OSINT and its nuances.  Allow me now to introduce Ksenia Bakina, and I apologize for my pronunciation of your name.  Ksenia, please introduce yourself and use your 10 minutes.  Thank you.

   >> KSENIA BAKINA:  Hello.  I'm Ksenia Bakina, a legal officer at Privacy International, and it's a pleasure to be here; I'm thrilled to be part of this exciting panel.  In case you haven't heard of Privacy International, we're a registered charity based in London.  We work at the intersection of technology and rights, and we aim to ensure that technology is used to empower us, and to prevent governments and corporations from using it to exploit us.

Our goal is to protect democracy, defend people's dignity, and demand accountability from institutions that breach public trust.  Today I would like to tell you a little bit about PI's work on social media intelligence, which we call SOCMINT in the UK.  We have been aware for a long time that in the UK, police and local authorities are using social media intelligence, whether to profile individuals, to predict their behavior, or to monitor protests.

Back in 2019, we sent freedom of information requests to over 250 local authorities, as we had received information that it's not just the police and law enforcement using it, but also governmental agencies such as local authorities.

Our research demonstrated that over 60% of local authorities were using social media monitoring for areas such as council tax payment, children's services, benefits, and monitoring protests and demonstrations.

This is open-source social media data, and in some instances local authorities would go as far as making accusations of fraud and withholding urgently needed support from families who were actually living in poverty.  All of this was being done without any comprehensive guidance or internal oversight of the reliance on open-source social media intelligence.

We found that there had been no quality check on the effectiveness of this strategy in their decision-making, and local authorities appeared to adopt the approach that if your data is out in the open on social media, then it is fair game.  Of course, this was done without individuals' awareness or knowledge, and we have been very concerned, because it showed that if there are no processes in place for internal audits, then no record is kept of how often this method is used, whether it is actually effective, and whether it is used in a way that is legitimate, necessary, and proportionate.

In addition, in March of 2021, we intervened at the European Court of Human Rights in the case of Salman Butt v. the United Kingdom.  The case concerned the gathering and processing of Salman Butt's social media data by a UK Home Office unit, the Extremism Analysis Unit; by obtaining information from his Facebook and his Twitter, he was profiled as an extremist.

In our intervention at the European Court of Human Rights, we argued that this constitutes a serious interference with the right to respect for private life, and that it goes beyond individuals' expectations of how their personal data on social media may be used.

We believe that the use of social media intelligence by public authorities or law enforcement agencies must be governed by a foreseeable legal framework and contain a series of strict safeguards.  What we're seeing in the UK is that there is a legal framework for covert social media intelligence, that is, instances where fake profiles are created or where someone pretends to be your friend on Facebook to access your information.

However, when it comes to information that is publicly available, there is a wild-west approach: no regulation and no sufficient guidance.

And finally, most recently, what we've also seen this summer is that local authorities are increasingly relying on social media information to assess the age of young asylum seekers coming to the UK when their age is in dispute.

We have again sent freedom of information requests to local authorities, and we're still in the process of receiving all of the information, but already we are seeing that local authorities use two techniques to help them assess the age of young asylum seekers.  They can either review information from social media that is publicly available, or they can ask young asylum seekers to provide the log-in details and passwords to their social media profiles, which enables them to take all of the data, all of the Facebook history available to the user, as well as data from messaging apps such as WhatsApp and Messenger.

Similarly to our research in 2019, we again found that there is no guidance, no policy, and no training being provided to local authorities who rely on social media data to assess the age of young asylum seekers.

Moreover, when the data is collected or viewed, an appropriate adult or solicitor is not always present with the young person; the authorities have claimed that it is not a situation where support would be needed.  However, we have been in touch with some of the lawyers representing these young asylum seekers, and we have been told of situations where an asylum seeker may not have wished to provide all of their data on WhatsApp because they had sent private sexual images to their partner.  At the time, they didn't have anybody there with them to explain this to the local authorities, and they didn't have any legal advice at the time either.

Again, no records are kept of how many young people's Facebook and social media data has actually been accessed for this purpose, and while the Home Office does have some policies with respect to age assessments, they do not contain any information or guidance about using information from Facebook or social media for this purpose.

This raises a number of problems, because there is a complete absence of consultation, independent oversight, and transparency.

Further, there is a lack of understanding of social media in general, of how it is used by users, and particularly of how it is used by young people.  We can understand that many young people may wish to present themselves on social media as older, cooler, or more mature.  This raises significant red flags for any inferences made from social media data.

As another example, one of the lawyers we have been in touch with told us that one young asylum seeker was actually accused of lying on his asylum application, because in the application he had said that he doesn't have any brothers or sisters.  Yet what they found from scrolling through his Facebook profile is that he had responded "thanks, bro" to about 50 comments, which was then used to insinuate that he must have been lying about having siblings.  The impact of this is very significant, because if you are deemed to have lied in one area of your asylum application, the inference can further be made that you have probably also lied about your age.

So, we believe that this process of using social media intelligence is extremely invasive, highly unreliable, and disproportionate.  The fact that there is no internal audit raises the question of how local authorities can judge whether it is actually effective in practice.

Just to sum up, we have concerns surrounding the use of SOCMINT, whether by local authorities or by law enforcement agencies, ultimately because this practice weaponizes the devices and platforms that we use every day.  As has been said previously, it enables groups of users to be targeted and profiled, in particular groups that are already at risk, such as women, LGBT people, journalists, human rights defenders, and asylum seekers.

We believe that the use of social media intelligence can pose a serious threat to privacy and other fundamental rights and freedoms, such as freedom of expression and peaceful assembly, thereby threatening the very existence of modern democracies.

In addition, it also undermines the delivery of justice, because justice depends on the integrity and accuracy of evidence, which social media intelligence often fails to provide.  That's it from me, and I look forward to the discussion later on.

   >> CAROLINA BOTERO:  Thank you very much, Ksenia.  Very interesting examples, and they again pose the same questions the previous panelists raised: that in many cases it is very difficult to distinguish between what is open and what is private, such as chats and messaging apps, or, as you said, when the authority gets access to the social network account itself and enters a part of the platform that is not available to everyone.

Yes, those are the kinds of problems now arising from the use of OSINT by authorities.  I would like to know, Eduardo, whether there is any question from the audience or in the chat.  Would anyone like to put a question to the panel?

   >> CARRILLO EDUARDO:  Does anyone?  We have a few on-site participants.  I'm wondering whether anyone has a question they want to raise with the panelists?  I don't have access to the Zoom; I think you have that.

   >> CAROLINA BOTERO:  I have one on Zoom.  Ksenia, can you please repeat the name of the ECHR case in which PI intervened, and the status of the process right now, please?

   >> KSENIA BAKINA:  Yes.  So, the case is Salman Butt versus the United Kingdom, and I will write it in the chat as well so it's visible.  The application was made last year, and we are waiting while the court considers it; unfortunately, we don't yet know the date of the hearing or the outcome of the court's consideration.

   >> CAROLINA BOTERO:  The position of the court on this topic will be really interesting to follow.  Thank you very much.  There is another question.  Please go ahead, Agustina.  If you can talk, that would be perfect.  Go ahead.

   >> AGUSTINA DEL CAMPO:  Hi.  Thank you so much for these interesting presentations.  We have been thinking a little bit about OSINT at CELE, and the question I have for you is this: there are a number of different potential uses for OSINT, so if, say, you have a judicial order, then maybe OSINT should follow certain guidelines.  Should we be thinking about guidelines or rules for OSINT according to the different purposes we want to use it for?  Should we be thinking about different rules per subject who is using it, some rules for the private sector and other rules for the public sector?

Should we be ‑‑ how do we categorize?  How do we think about frameworks that can be helpful to set the limits that we need to set and to account for the permissions that we need to account for?

   >> CAROLINA BOTERO:  Thank you very much, Agustina.  Who wants to start?  Eduardo, go ahead, please.

   >> EDUARDO BERTONI:  It's a comment; I'm not sure if I'm going to answer the question, but let me raise one case in Argentina.  For full disclosure, I was the Data Protection Authority in Argentina at the time when the Ministry of Security proposed some sort of protocol for what we call cyber surveillance, and it was precisely a protocol for police and other law enforcement offices that allowed them to collect information from Open Sources, particularly from social media, to prevent some crimes.  In that case, the Argentine Data Protection Authority opposed the protocol because it was too vague: it was not clear who was going to validate the information, particularly when the information collected includes personal data, who was going to have access to that information, and for how long they were going to store it, and all of the principles that govern the protection of personal data were not included in the protocol.

So, I am saying that because, in my view, and there are probably other cases in other countries as well, it was a clear example of trying to, let's put it this way, regulate OSINT practices, and it would be important to do so, from my perspective, differently in different sectors.  I mean, it's not the same when law enforcement officers use OSINT for their objectives as when a researcher at a university is doing OSINT for an academic purpose.  Both are OSINT practices, unless we define OSINT in a different way, but the regulation or the limits, and there are ethical limits and there could also be legal limits, are different in my view.

   >> CAROLINA BOTERO:  Thank you, Eduardo.  I don't know if Dave or Ksenia thought about it, what kind of recommendations can we present for regulations?

   >> CARRILLO EDUARDO:  Also, let me interrupt you for a second.  There is a question from the audience, so we can take that as well.

   >> CAROLINA BOTERO:  Okay.  Let's do that one because we have 10 minutes left.

   >> CARRILLO EDUARDO:  Okay.  It's important then.  I'm sorry, Ksenia and David.  So, go ahead.

   >> AUDIENCE MEMBER:  Thank you.  From Ethiopia, concerning the ethical and legal boundaries, especially in Africa, most legal bodies for law enforcement are not aware of the digital world or OSINT, Open-Source intelligence.  What is your recommendation regarding the intersection of technology and human rights?  Because academics are providing only cybersecurity and information security backgrounds, but the legal bodies are still following traditional enforcement.  Thank you.

   >> CAROLINA BOTERO:  Thank you very much.  I would pose the other questions; there are two questions in the chat, and then we can close, to keep to the time.  The first question is: where should we all stand as Internet users before the possible risks that OSINT could have on our fundamental rights?  Should we change the way we use social media today?  The last question is: what could be the ethical limits of OSINT, and how can we control them as citizens?  Shouldn't there be limits ‑‑ shouldn't the limits be legal?

Dave or Ksenia?  Who would like to take that?

   >> DAVE MAASS:  I can take one of these questions.  So, I'll look at the one about where we should stand regarding the possible risks that OSINT could have on fundamental rights, and whether we should change the way we use social media.  I'm always reluctant to encourage people to self‑censor because I do want to empower people to express themselves and to do reporting and activism online.  But that said, we have to face the reality that we no longer live in a world where you can post something and only your friends are really paying attention to it, and it just disappears into obscurity in a few weeks.  We're in a time where it is being mined and scraped, and things that you post could have ramifications down the road.

One of the things I find amusing in my work is that while there are government secrets around surveillance technology, I can go on LinkedIn and find exactly what I want to know by looking at military contractors' and police contractors' LinkedIn pages, where they list all of their experience and all the things they're good at doing with particular technologies, things that they probably shouldn't be putting out into the world.  So, I do recommend that people think about what they post online and how it could be used, particularly when they're attending protests: deciding whether they're going to post photos of everyone at the protest with their faces visible, or choosing to post a photo of people from behind, or seeking consent before posting photos.  That's not to say that I don't think journalists should do their jobs; I think they absolutely should.  But I do think there is a risk in what you do with social media, and maybe you shouldn't post things unfiltered all the time, because you don't necessarily know how it's going to be used against you or your friends later.

   >> CAROLINA BOTERO:  Thank you very much, Dave.  Ksenia.

   >> KSENIA BAKINA:  Yes.  I'll come in and pick up on a couple of these questions.  I'm also against self‑censorship because this goes completely against our right to freedom of expression.  And considering the ludicrous inferences that can be made by governmental agencies ‑‑ if you respond to a comment saying "thanks, bro" and this is taken to mean you have siblings ‑‑ you cannot ultimately predict how the data that you post will be used, because it can be interpreted in so many different ways.

And so I think the burden should not be on us, the users, to self‑censor, because we shouldn't be treated as potential suspects in the first place.  But I do agree that, especially regarding attending protests, perhaps images and photographs should be posted with care.

And regarding regulations, it's always a tricky issue.  One thing I can say from my perspective is that we need stronger regulations for Open-Source intelligence, because law enforcement and public authorities should not be going on fishing expeditions and taking all of the extensive, highly sensitive, and personal data from social media.  The approach needs to be much more targeted to a specific situation and to specific users.  We certainly need stronger audit processes in place.

We need people to be able to seek redress when a decision is made.  For instance, as I said, if local authorities can't even tell us how many people's profiles they've visited and whose decisions have been impacted, how do people challenge their completely bizarre inferences?

So, definitely stronger regulations need to be in place for both companies and public authorities using it.  And regarding the last question from the audience, about not much information being available on Open-Source intelligence, I certainly think it's something that Civil Society and academics need to invest more time and funding into researching, because whereas covert surveillance technologies do have some regulations, this is an area that seems to be a Pandora's box, and many public authorities adopt this wild‑west approach, which is completely unacceptable to us as a society.

   >> CAROLINA BOTERO:  Thank you very much, Ksenia.  I think we are approaching the top of the hour very fast.  I just call on anyone who would like to make some closing remarks, one minute each.  I don't know, Eduardo, do you want to do your one‑minute closing remark?

   >> EDUARDO BERTONI:  Not much more than what I already said.  There is a very important link, in my view, between OSINT practices and data‑protection regulations, and the consent that we as data subjects provide when we are using platforms.  This is very difficult, and it is something that we, of course, need to work on more, but we are working on that area in the field of data protection.  This is one thing.

The other thing is that I agree that in cases in which law enforcement agencies use OSINT practices, they need to be more regulated and overseen.  This is, for me, very clear, because the risk of using OSINT practices in ways that nobody would admit in the analog world is very high.  But, and I'll finish with this, I'm not very much in favor of advocating for regulation of OSINT practices in general, because that could affect OSINT practices that are not linked with law enforcement activities and could affect other sectors of society, like researchers, who could be limited in their OSINT practices if we start making regulation that also affects that kind of practice.

   >> CAROLINA BOTERO:  Thank you very much, Eduardo.  Ksenia, do you want to do your closing remarks?

   >> KSENIA BAKINA:  Thank you.  Yes, I just want to say that, just as we have privacy in public spaces offline, we need to make sure that privacy in online public spaces is also secure, and I think that consent is certainly problematic, because just because I consent to posting pictures for my friends and family to see doesn't mean that I consent to law enforcement reviewing the images or a company like Clearview scraping them into a massive database.  Consent is always context specific.  And even with young asylum seekers, even though they may have consented to give their log‑in details and password to a local authority, that doesn't mean that their consent was truly informed and freely given, especially considering the inherent power imbalance between a young and vulnerable asylum seeker and law enforcement and public authorities.

So, this is definitely something that we should bear in mind when we're considering either Open-Source intelligence or social media monitoring in general.

   >> CAROLINA BOTERO:  Thank you.  Dave, you started so you'll finish.

   >> DAVE MAASS:  Sure.  When it comes to regulating issues related to social media monitoring, one of the measures that we have supported in the United States is this idea of local rules called community control over police surveillance.  When law enforcement wants to acquire a technology tool, such as any of these forms of software that mine the Internet and help them analyze social media and other forms of OSINT, they have to go through a process where they write a policy, do a privacy impact and civil rights assessment, and submit that to a governing body, an elected body that has to hold public hearings and solicit public input before approving or rejecting the acquisition of that technology.  That's something that would apply to social media monitoring technology, whether that's just straight‑up observing or using it to predict crimes or other elements.

So, I would say that at least one practical thing that we've been working on already is this sort of local governance measure.

   >> CAROLINA BOTERO:  Thank you very much, Dave.  I believe it is also very important to find out what the companies are offering to authorities, right, what capacities they're building up, so that we can match that with the uses.  This is also something that Karisma is working on, and we might be talking about it later.  Thank you very much to those of you here with us, thank you very much for hosting this wonderful panel, thanks for all the help from Eduardo there in Ethiopia, and thanks for allowing us to be on your computers.  Thank you very much.