IGF 2021 – Day 0 – Event #92 In the search of a golden mean – different perspectives on the regulation of digital markets

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: I would like to welcome our distinguished guests, Mrs. Katarzyna Szymielewicz, President of Panopticon Foundation.  Good morning. 

>> KATARZYNA SZYMIELEWICZ: Good morning. 

>> MODERATOR: Mr. Marcin Krasowski, Government Affairs and Public Policy Manager at Google. 

>> MARCIN KRASOWSKI: Hello, everyone. 

>> MODERATOR: As well as two guests who are with us online, Fredrik Erixon, Director of the European Centre for International Political Economy.  Hi, Fredrik.  Okay.  I see Fredrik here.  Okay.  He's waving.  He's with us.  And Ms. Marta Pawlak, Public Policy Manager of Startup Poland. 

>> MARTA PAWLAK: Hello, everyone.  Great to see you. 

>> MODERATOR: Hi.  So maybe I will just skip through the primary reason why we have gathered here today.  So obviously, everyone knows regulation of digital markets is at the top of the most important agendas globally.  So it is a topic of the utmost importance for the IMF, the World Bank, and the EU.  In recent years, at the level of the European Union alone, a number of regulations aiming at the regulation of digital markets have been proposed.  So we have seen, actually, since 2018, at least seven -- and this is a very nonexhaustive list.  We can count together.  So that would be 2018 and the full applicability of the GDPR.  Then in 2020, P2B, and in 2021, there is already the Digital Services Act -- Digital Markets Act, Data Act, Data Governance Act, Artificial Intelligence Act, and 2022 will actually see a number of new regulations being proposed by the Commission.  These will include the Cyber Resilience Act, as well as the Single Market Emergency Instrument. 

  While the number might not be conclusive here, the issue of proposing so many regulations is concerning for two primary reasons.  First, new regulations are being proposed before the time needed for an effective impact assessment of the previous ones has lapsed.  This is the case with the Digital Markets Act and the P2B Regulation, which both regulate the terms of cooperation between large online platforms and business users.  And second, we actually see the scopes of these regulations overlapping. 

For instance, the topic of illegal content is regulated at the same time in the Terrorist Content Online Regulation, the Copyright Directive, and the Digital Services Act.  But enough of this.  My first question, to Katarzyna, is: how did we actually get here, and why do we need so many regulations in the first place? 

>> KATARZYNA SZYMIELEWICZ: Well, thank you for inviting me to this discussion.  Thank you for proposing very bold statements to start with.  I might sympathize with them and even agree that the inflation of new regulations and the lack of time for impact assessment is an issue, especially for businesses that have to comply and invest a lot of money in compliance, but also for users, who might have reasons to feel lost.  I really cannot explain why it happens, because I'm not representing the European Commission, which would, I guess, be able to say something more about why.  If you ask me why I would like to see that market regulated -- I'm not saying regulated with so many different instruments and in such a rushed way.  Not that, definitely.  But I would like to see that market regulated for a very basic reason.  We have enough evidence, collected over the past two decades, that the currently dominating business model of online platforms -- which fund their activities with advertising, and not just any advertising but more specifically behavioral advertising, which requires constant observation of Internet users, usually beyond their control and awareness -- does cause harm on an individual level for these users and on a societal level for us as a community, be that in the shape of disinformation, polarization, radicalization of debates, and various other indirect impacts that hurt our democratic structures, the quality of public media, and public health.  I'm now thinking about digital well-being being at stake and people getting more and more -- not just used to but simply addicted to certain interfaces.  Not services but interfaces -- the design of the interface would be an issue here.  So there are many reasons why I would say we need regulation.  The big question is what regulation and at what pace? 

So my understanding is that the European Commission tries to react to these harms, which are being mapped as we speak.  These harms are not revealed all at once -- we didn't have good documentation, good evidence ten years back when the GDPR was in the making, before the EU even started regulating platforms as such -- but they are revealed when whistle-blowing happens or when there are breakthroughs in the debate, so their timing does not depend on the Commission.  And I can imagine the European Commission is under pressure to react first to the harms that are part of the public debate every now and then.  So they need to show that they care about citizens, and that they care about other business entities that might be harmed in these business models by the dominant players.  Right?  Whether they react timely, whether they react with preparation, it's hard to say.  I can imagine that, being the European legislator, when you have to react, you have to start doing something, and doing something means regulating, because that's basically the toolkit you have.  And if you want to wait for impact assessment, if you want to wait for evidence to be verified and for very strong analysis showing what exactly the problem is and how exactly we should react, you might simply miss your chance. 

I often hear critical comments towards the EU exactly on these grounds.  Why do you come so late?  Why haven't you reacted before these harms happened?  And we can always say, well, okay, technology is never waiting, right?  We've all been immersed, including businesses, in a certain technological environment where things simply happen.  They've been happening over the past two decades, but only now do we see the full scale of the harm.  On the one hand, it seems too late to regulate.  On the other hand, the inflation of these papers and the lack of time for assessing them is hugely problematic.  So how do we go about that?  It's a very critical dilemma which maybe we can discuss today. 

>> MODERATOR: Thank you.  So today we also have a representative of big tech.  And I would like to ask Marcin, how does Google find itself in a regulatory environment that is changing so dramatically and so quickly, and how do you respond to these concerns that we have just discussed? 

>> MARCIN KRASOWSKI: Thank you very much.  And first of all, good morning, everyone, and thank you for inviting me to this panel.  I think that the large number of regulatory acts is a growing issue for all companies.  It's not only big tech but all companies that have to comply, with rising costs, with additional constraints on their businesses, et cetera, et cetera.  And this is nothing new. 

There is even a saying that in the U.S. they have GAFAM, meaning the biggest online platforms, but in the EU we have produced the GDPR.  And I'm not saying that this is something bad, because no one is saying we should get rid of these regulatory acts.  Everyone would like to see their data protected.  Everyone would like to see harmful or violent content somehow mitigated online, our kids protected, as well as our consumer rights.  So all of this is okay.  And when we at Google look at it, what we would like to see in the EU is finally a completion of the digital single market.  We see different regulatory attempts, with different member states trying to regulate the same things -- for example, what is allowed online and what is banned -- and we have different obligations imposed on us. 

So we have to comply with them on different bases.  The same content can be judged differently in Poland and in Germany, and this raises a lot of concern for us because, as you are probably aware, we operate at scale.  So when we see an attempt to harmonize the rules, to provide the same conditions for how we operate across different member states, then we obviously are happy with it.  And not only us but also smaller businesses, because even though we operate at scale, and a lot of different issues come with scale, small companies also benefit from the same laws, because they cannot afford to spend a lot of money on compliance. 

There were numerous studies actually demonstrating how an unjust burden was levied on small enterprises when they had to comply with the GDPR.  And we are seeing more or less the same thing in the big tech world.  And I need to come back to one thing that Katarzyna mentioned, about the evidence for ads being harmful to society.  We see this completely differently.  We at Google -- the majority of our revenues come from targeted ads.  And we do not see that only we benefit; a lot of small companies that rely on Internet advertising can reach new markets, new customers, and sell their own services or products using targeted ads.  And this is good not only for them but also for consumers. 

Just think about this: if you are interested in, for example, shoes, then I think you would benefit from ads showing you shoes and not, for example, maps.  So I think that, because of that, a lot of users basically save their time. 

Same with the press.  A lot of independent, local, and regional press is mainly funded by ads.  And since the majority of news we consume right now is online, most of these revenues come from online advertising.  So when I hear those kinds of claims, I do not fully agree with them.  And obviously, there are issues with polarization.  But do they come from targeted advertising?  I am not so sure.  Katarzyna said that there is a lot of polarization right now online.  This is true.  But is it caused by targeted advertising?  I'm not so sure.  The polarization online, in our opinion, in my opinion, stems mostly from polarization and divisions which are already present in different parts of societies.  And these are only reflected online. 

We can argue that these can be reinforced online, obviously, and that is a completely different discussion that we can have, and should have, about what kind of content users see so they do not end up in these rabbit holes -- so they can see good, valuable content and not only harmful content.  Those kinds of discussions we are open to having.  But let's not overreact, and let's not ban targeted advertising altogether, because it is highly beneficial for the whole economy.  Thank you very much. 

>> MODERATOR: Thank you, Marcin.  And before I go to our next panelists, I would just like to invite everyone to ask questions when they have some.  So, yes.  But coming back to our discussion, Marcin has already shown a kind of landscape: things that we consider to be digital and online do not just touch on the large technology providers; they are intertwined very deeply with our society.  They influence entrepreneurs, the press, and very many other sectors.  So now I would like to ask Marta: does the premise that digital regulations influence only the big technological players hold true in the start-up community?  How do these regulations influence the smallest and most innovative players? 

>> MARTA PAWLAK: Hello, everyone, and thank you for having me, and thank you, Marcin, for referring to start-ups and the price they could possibly pay for the implementation of new regulations.  Marcin and I were, I think, on at least one panel together with start-up representatives, when we asked them to refer to the new upcoming regulations and to try to answer the question of how these regulations, though targeting large companies, would probably influence them.  And, of course, new acts always being around the corner means time necessary to implement them and financial costs for companies, which is more harmful for start-ups, permanently struggling to build value with limited resources, than for big companies.  And now, in the time of COVID, it's even more painful because many start-ups' business models are already harmed by the crisis. 

But coming back to these digital regulations: as Marcin mentioned, start-ups very often do not have enough resources to spend on marketing, and they very widely use the tools that are offered by large platforms.  Marcin mentioned targeted advertising, and we surveyed a lot of start-ups, asking what marketing tools they are using, and the answer was simple: targeted advertising.  So definitely, taking away or even limiting this option will be very, very harmful for start-ups. 

And I can give you, for instance, the example of an online supermarket that makes sure that the items sold on its website are safe and that has implemented all necessary legal safeguards and measures.  As with any marketplace, it can happen that products are recalled for several reasons.  And under the principle that what is illegal offline should be illegal online, as implemented by the DSA, if I remember correctly, the burden should not fall more heavily on online supermarkets, for instance, as they are an intermediary and do not control the products of their suppliers.  So making a start-up online supermarket liable would add obligations and end up making it harder for smaller local sellers.  Or imagine a start-up which allows consumers to review any company or service.  That start-up wants to comply with the new regulations from day one, but the latest provisions discussed in the European Commission and the Parliament would prove difficult, if not impossible, to comply with, because by establishing a threshold, policymakers create a glass ceiling that discourages scaling.  Even before launching its business, the start-up would have to implement an internal complaint mechanism or a trusted-flagger system, remove illegal content within 24 hours, and randomly check its products or services. 

And, of course, this will discourage a start-up from launching in the first place.  And one more thing to highlight.  At Startup Poland, we all agree that all these digital regulations should be adapted to the new reality.  But they should also be proportionate for start-ups, because start-ups have different possibilities and different risk exposure.  So we think that the new regulations can be a big win for start-ups, but we have to write them, design them, so as not to destroy start-ups' businesses. 

>> MODERATOR: Thank you so much.  So we have heard a lot about the undue regulatory burden that is being imposed on smaller companies.  And as I mentioned in the beginning, there is now really a plethora of new acts, which will cause great compliance costs and basically change certain business models.  So this has an impact on the entire economy.  And now I would like to ask Fredrik: what is the outlook for the EU's economy, keeping in mind the developments that we are speaking about? 

>> FREDRIK ERIXON: First of all, thank you very much for inviting me to join this panel.  I'm sorry I can't be in Katowice today.  I mean, the question that you put is, I think, a fundamental question, and it can be addressed in many different ways.  One of the first things we need to acknowledge is that Europe is now choosing to have a level of restrictions on commercial behavior and on innovation behavior online which is different from what you can see in many other major markets in the world.  This is obviously going to have a consequence for firms' decisions about where they're going to locate new innovations and new technologies, and where they will invest in new types of commercial behavior.  And I think this is probably one of the consequences for the economy that European policymakers need to be aware of. 

If you go through, for instance, with the Digital Markets Act and you put a lot of restrictions on what gatekeeper platforms can do, it's not that they're going to stop doing things.  They're not going to stop innovating.  They're not going to stop trying to use data in much more efficient ways in order to deliver more value to users.  It's just that it's not going to happen in Europe.  It's going to happen elsewhere. 

So you get a reallocation of investment and innovation when you choose to have a level of restrictiveness in Europe that is different from the rest of the world.  And I think that this adds to some of the problems that we already have.  We know, for instance, that many start-ups in the digital space tend to migrate out of Europe once they reach a stage where they become more dependent on growth financing and an expansion of users, which becomes easier to do in other parts of the world than in Europe. 

If we look, for instance, at investment in AI, Europe is far behind the United States and China when it comes to how much money is being invested, which is partly a consequence of differences in regulatory approach.  So what we can expect here is that when Europe goes ahead with even more restrictions -- in what you've mentioned already, for instance, the DSA, the AI regulation, or other regulatory proposals -- this will have a consequence for firms' choices of where they're going to start, where they're going to expand, and how they're going to experiment with new technology and new innovations.  Now, this, to me, is the most fundamental aspect of it, and it goes to the heart of some of the conversations that you've already had, which is that I don't see a point in going ahead with these economic regulations.  I think there's a perfectly good case to even improve and sharpen some of the regulations that we have on, for instance, consumer protection. 

We can do a lot more with regulations that concern consumer harms -- perhaps even mental harm, by having age controls on social media.  But we're not doing that.  We're going ahead with regulations that are much more economically oriented; they deal much more with the market and with competition than with the actual problems that we are confronted with.  And this is partly a consequence of the fact that we don't have a European policy or a European mandate to do these things.  Marcin mentioned, for instance, how definitions of illegal content vary between different countries in the EU.  That's absolutely true.  And it's a consequence of the fact that we don't have a European free speech code.  What is allowed to be said in one country may be unlawful to say in another country. 

And since the EU cannot go in and give a clear legal definition of what constitutes illegal content, they try to move with market regulations in order to achieve the same thing.  So that's where we get the DSA and sort of the threat of big penalties if platforms aren't removing not just illegal content but what could be harmful content, and what could possibly become illegal or harmful as well.  So it's a choice which is being made not because it's the rational way of going about regulation.  It's a choice that is being made simply because we lack the tools in the European Union to deal with some of the more fundamental aspects of illegal content. 

I think this is highly unfortunate, because when you move with economic regulations, you do get a lot of economic consequences -- exactly those that have already been pointed to by Marta.  If there is a threat of a big penalty if you don't remove content fast enough, what are platforms going to do?  Well, they're going to remove a lot more than they need to, simply because they want to reduce the risks.  And we have already seen this in lots of different regulations introduced in the digital space over the past ten years: now we have big platforms becoming censorious and removing far more than they should, simply because they're afraid they may get a high penalty if they don't, and because it is going to cost too much to go into each and every piece of posted content in order to figure out whether it is on the right side or the wrong side of the law. 

So that's why we're going to get these consequences that Marta pointed to.  Things like Facebook Marketplace, for instance: it's just impossible for Facebook to control each and every thing which is being put up for sale on Facebook Marketplace by an individual, so they're going to take away the opportunity to sell for sellers that are too small to justify taking on the costs of inspecting what type of products they are putting there.  Thank you. 

>> MODERATOR: Thank you so much.  I can see that we have some questions.  Could I please ask the I.T. support to open the Q&A? 

>> KATARZYNA SZYMIELEWICZ: (off microphone)

>> MODERATOR: Of course.  Then could you please go forward with your (?) and we will try to -- yes.  Okay.  So Katarzyna will have a comment.  We will figure out the questions that we have on chat, which came first, and then we will proceed with your questions. 

>> KATARZYNA SZYMIELEWICZ: Okay.  Thank you.  Because I was speaking first -- and maybe it's not natural, actually, that I'm the only representative of Civil Society, and then came voices from industry -- I feel I need to address some of the claims and concerns before we open the discussion.  So let me insert three comments. 

First, my colleagues speak about innovation.  But there is a race to the bottom when it comes to user value, started by big tech but joined by start-ups: innovating in a way that is disturbing for consumers, for human beings.  That is how we frame it.  And the key thing here is what is at stake in this race.  If you innovate for the sake of innovating, or for the sake of your profit, you have to accept the fact that people will not like being affected by the results of that innovation.  If people endorsed the business model, if people had nothing against your business models, they would also not ask for you to be more regulated, right? 

So if I were in the shoes of my colleagues from industry, I would be asking myself a question: what is wrong with my business model that so many people -- citizens, consumer organizations, human rights organizations, media, regulators -- across the world, not only in Europe, are pushing back against it?  Remember what is happening in the U.S.  For the last two to three years, we keep hearing very concerned voices from the U.S., including Silicon Valley, including former key investors in online business models who were there at the very beginning of Facebook and others, calling for regulations, saying: how come we allowed this toxic business model to develop to the extent that we can no longer contain it?  The U.S. now seems to regret that they didn't have a GDPR before.  Why so?  Is it because everybody wants to innovate and just make more money, and only consumers are grumpy?  I mean, come on.  Let's face it.  If people wanted to be exploited, they wouldn't ask for you to be regulated, right?  So if I were on the industry side, on the business side, I would try to imagine the logic of the people who actually come and ask for more regulation.  What is behind their claims?  Now, if it were so easy to isolate major harms and just regulate point by point, we would have done that.  We tried.  The GDPR was such an attempt, among many others.  But what came out of the whistle-blowing and the many debates I have already mentioned, in the U.S. and in the EU, around major platforms?  The business model is at stake.  The business model is the issue.  The advertising-funded business model -- and not because of advertising as such, and I will come back to this.  It's very important to differentiate.  The algorithms and the logic of targeting people based on their intrinsic, hidden traits -- features that they are not comfortable revealing and having exploited -- this is the issue, and it is common to a specific type of advertising.  Not every type of advertising.  And to profiled content.  
This is why I conflate the two, talking about social harms, talking about mental harms, about digital well-being, which is at stake, but also the radicalization and polarization that Marcin mentioned in his remarks.  I'm not saying advertising is to blame for this.  I'm saying that the same business logic, which is aimed at engagement -- which fundamentally requires constant user engagement and profiling, and tries to engage people actually against their own digital well-being -- this is the part of the business model that needs to be fixed.  And you cannot do it without addressing the business model.  This is why we have the DMA and the DSA, which are trying to do exactly that. 

Whether this is radical or not -- I wouldn't say so.  In the current draft, which I hope we have all read since last week, there is nothing radical in these propositions.  Why?  Maybe because big tech was lobbying so violently against it, with success.  And this is what I also want to say in this debate.  If we are concerned about start-ups and small business, I would talk more to the biggest companies, to the leaders in this market, to stop misbehaving, because unfortunately the misbehavior of the leading companies, and their lobbying against fundamental restrictions, causes harm to smaller players.  Right? 

I hear the voice of concerned smaller industry players, but you pay the price for the harmful behavior of the biggest ones, who abuse people's data at scale and cause the public outcry that then has this kind of negative trickle-down effect on the whole market.  So I would encourage more debate within the industry against the most harmful practices, like what happened in the advertising industry itself.  We all recall that very irritating type of advertising that was trying to catch people's attention at any price -- with voice, with sound and everything, very irritating.  It was self-regulated.  It was simple to self-regulate.  The same with advertising that openly targeted kids.  It was self-regulated effectively, but that was simple.  Now we have to address something far more complicated: a covert, nontransparent type of targeting, invisible to normal users, that exploits human behavior and human vulnerabilities.  This is the problem.  Not advertising as such.  I truly believe that all the examples that Marcin and Marta mentioned could operate legally without any restrictions, because these are not harmful examples. 

Let's talk about harmful examples of what needs to be fixed rather than moving the debate to extremes like whether to ban advertising or not.  Nobody is proposing to ban advertising.  Not me, at least.  But let's really address what is toxic in the business model of the biggest players, and through that prevent harmful regulation for the whole market. 

>> MODERATOR: Thank you so much, Katarzyna.  It's actually amazing that you said that because my next question was exactly about that.  So I wanted you to explain why targeted advertising is a threat and why it concerns us.  But I guess –

>> KATARZYNA SZYMIELEWICZ: It's not about targeting as such.  It's about a specific type of targeting which does not include the user's choice.  So, to be constructive, what we propose is: let's fix the opt-in.  According to the GDPR -- in my interpretation of the GDPR -- targeted advertising based on behavior requires opt-in anyway.  But we all know how opt-in is done these days.  Yes?  It is not really any form of consent, or at least not informed consent.  So if we face that together, if we innovate in that field -- and I truly believe, truly believe, that industry is capable of innovating here -- we could propose to people some kind of interface that really engages with the user and asks two questions, or asks in two steps.  Of course, we are not talking about pop-ups.  It has to be designed better than pop-ups.  So, first: do you want to opt in to advertising targeted based on your behavior?  If not, it only means you will get contextual advertising, or advertising targeted based on the profile that you authorized.  Yes?  So we are only talking about a specific opt-in for the most invisible, nontransparent type of advertising.  And then, if the user, the human being affected, tells us, yes, I'm ready for that, then we ask that human to specify what features they are comfortable having targeted at them.  And now we have consumer value, user choice, everything you mentioned -- but truly fulfilled, and not just rhetoric. 

>> MODERATOR: Thank you so much.  Could we have the questions, please, from the Q&A on the Zoom?  Could you please open them?  We can't see them here.  Yeah, okay.  So, the gentleman in the room who wanted to ask a question. 

>> Yeah, absolutely.  May I have a mic? 

>> MODERATOR: There it is. 

>> Thank you.  Okay.  Thank you.  (?) University.  I'd like to be as polite as possible, but I just need to call out the massive disinformation that was said during this panel.  And I will say it very precisely.  For example, Fredrik said that gatekeepers are not going to stop innovating.  My question, based on what really is in the Digital Markets Act, is: do you mean that discriminating against third-party products and favoring your own in your algorithms is innovation?  Do you mean that?  This is the precise letter of the law.  Or that cutting off SMEs from app stores is innovation?  This is the truth, not some vague speech about harming innovation.  How can a regulation that is supposed to be antitrust and pro-competitiveness harm start-ups?  It's the other way around.  This is fake, what you're saying.  And a final question, a question to Marcin.  The Belgian Data Protection Authority has just declared that the Transparency and Consent Framework used by IAB, and also by Google, is not compliant with the GDPR.  So when are you going to delete all the data that was captured using this framework, which is not compliant with European law, and how do you intend to do it?  What is the action plan?  This is the question from me. 

>> MODERATOR: Okay.  Thank you very much for your question.  So maybe, because the first part was addressed to Fredrik.  So, Fredrik, would you like to answer? 

>> FREDRIK ERIXON: Thank you.  I can do that.  Thank you for the attribution of being fake and coming with fake news.  So, the DMA itself does a lot of things.  Self-preferencing is one of them.  And other things you mentioned -- sort of access to app stores, et cetera -- that's part of the broad intention of not just the DMA but, of course, of competition policy cases, which have already been subject to court decisions and court reviews in the EU.  But it does a lot more.  It goes in to deal with interoperability issues.  It goes in to deal with the use of data, when you combine your own data with data that doesn't come from your core platform service or data that you've obtained from a third party.  It goes in to provide, at least in the abstract, restrictions when it comes to mergers and acquisitions. 

So the DMA is a very, very broad package.  Now, the question I put forward to you is basically: are other countries in the world going to choose the same type of regulation as the EU has done now with the DMA?  And my answer is no.  You can just compare the DMA, for instance, to what is being proposed in the UK.  You can look at the proposal in the U.S. Senate, which is most likely to become the core proposal for how the United States is prepared to deal with platform regulation.  And you find there are significant differences between them, which are going to have a consequence for innovation. 

And by "innovation," I basically mean how you're going to come forward with new types of services, with new types of products.  What type of space you allow for companies to do that and to try to figure out what they think consumers are going to favor in the future.  So looking at sort of some of the potential consequences of the DMA, we don't know, because everything is so vaguely formulated in the DMA at this point that it's just impossible to try to figure out what the practical consequences are going to be when you start to apply this. 

It may be, to reference, for instance, Marcin's company, that it's just going to be impossible to use Google Search in order to find a choice of restaurant in a city, where you get reviews from other people that have been to this restaurant or where you're going to get a map to show you how to get to that particular restaurant.  It may be that Amazon cannot continue with its current model and that it has to become a lot more like eBay in its entire offering, simply because the way that Amazon is restricting its users and the way it discriminates in terms of what behavior is allowed on its platform is most likely at odds with lots of things that we can see in the DMA. 

So the entire notion that the DMA itself is going to lead to a lot of consumer value, I think that's a highly debatable proposition.  And my problem with it is not just that we are most likely going to find sort of a reallocation where new innovation, new technologies, new products will be introduced in other parts of the world but not in Europe; it is also that most likely we are going to reduce the usefulness of platforms in Europe, leading to a much more fragmented and also a much more complex type of online market where you cannot sort of compete in ways that you would be allowed to compete if you were doing it offline.  Thank you. 

>> MODERATOR: Thank you so much, Fredrik.  And maybe also just to underline, I think it's important to note here that no one here is disputing the idea and the need to introduce those digital regulations.  The idea is rather to discuss and to see, as proposed in the title of this debate, different perspectives and different voices in order to make those regulations better.  So, so far we have ‑‑ I don't know if there is a question from the lady in the back?  Okay. 

>> AUDIENCE: Hi.  Thanks very much.  I wanted to share my perspective.  I'm coming from the UK, from Ofcom, which is the communications regulator.  But you might be able to tell by my accent that I'm actually American.  First I wanted to point out that the idea that we could address certain harms, such as the online harms the UK is trying to tackle in the Online Safety Bill, divorced from or in a vacuum that doesn't imply economic impacts on companies, is false.  So one of the things that we have to consider as an independent regulator is the proportionality of interventions in terms of how they will impact companies.  So you'd be surprised.  Things like age verification can have a huge and disproportionate impact, particularly on the smallest platforms, because it's something that benefits from economies of scale.  The more users you have, the cheaper each kind of age assurance or age verification check becomes.  So I just wanted to point that out. 

And I also think that we might be looking at the same glass of water in terms of similarities or differences and saying one's half full or one's half empty when we're thinking about how the DMA compares to parallel legislation in the UK or indeed the kind of thought that is brewing in the United States.  So, of course, there are plenty of differences in the way that people think about it, or the history, or the case law in which it has to exist.  But I do think that there are quite a lot of really interesting parallels and similarities between these three geographic areas.  In particular, the CMA, the Competition and Markets Authority in the UK, is quite aligned with a lot of the thinking in Brussels.  And it looks like the FTC is trending in that direction rather than away from it. 

But I think it's also worth ‑‑ yeah.  I just think we need to really zero in on those kinds of similarities and see what we have in common, because I think that's equally interesting.  And because we are all trying to regulate this global Internet, international cooperation and regulatory cooperation is the only way that we're going to be able to do this effectively.  So we should think about innovation not just in terms of who can come and exist in the existing platform economy, in the existing ad‑based market, but in terms of what kinds of new products or services or indeed business models we can try to create through things like safety‑by‑design guidance and business support mechanisms.  I think our Civil Society colleague made a really great point that innovation doesn't necessarily have to be ‑‑ you know, there is a social component here.  It's not a strictly economic proposition.  And we are seeing a lot of issues of trust in existing business models. 

So as a regulator, it's just thinking about how you can be proportionate towards existing economies but also maybe spur innovation in things like safety tech or things like privacy by design.  What does this look like?  And I think by providing guardrails and proportionate interventions, we might be able to help spur that across many different countries. 

>> MODERATOR: Thank you so much.  I really loved that intervention.  Especially because it got us to the next stage in our debate, meaning how we can get from the stage of complaining about what we don't like to the stage of how we can make those regulations better.  So far we have talked about digital regulations in general, because it's important that we not think only about one file ‑‑ for instance, people working on the DMA often know only the DMA and don't grasp the extent of the changes that are happening.  Nevertheless, we have seen one issue emerging across a few of these regulations: targeted advertising.

So targeted advertising was primarily supposed to be dealt with in the IA Act, but then it was proposed to be included in the DSA and the DMA. 

So now, obviously, we have also heard about the problems related to targeted advertising and the risks that it brings.  But I would like to ask Marcin: how could we actually make this regulation workable in addressing the risks mentioned? 

>> MARCIN KRASOWSKI: Yeah.  Thank you very much for the spirited debate and for the questions asked.  Maybe, if I may, I will start with targeted ads and the question from the audience.  So I truly believe that at Google we are compliant with the GDPR, but we have seen different regulatory activities popping up in different member states, questioning our compliance and how the new technologies are really in line with the GDPR, et cetera.  But these are emerging discussions, and we are not avoiding them.  We are actually engaging with regulators to discuss this.  We are even proposing measures of our own.  For example, from next year we will actually introduce a ban on third‑party cookies.  This is a ban that is supposed to strengthen the consumer, so that his or her data will be completely secure and will be managed only by Google and not by third‑party companies.  So this is truly motivated by protecting the consumer.  But then, on the other hand, we are immediately accused of undermining competition and our competitors online. 

So you can just see how difficult it is to balance all of the different interests online.  And we understand at Google that we are a global company with a global presence, and with size comes bigger responsibility.  So we try to actually measure our impact on society, on culture, on other companies.  But it is difficult.  And someone asked in the chat ‑‑ I saw the question ‑‑ whether the right approach would be to focus on big tech, on gatekeepers, on VLOPs, whatever the name is, or whether we should cover all companies, all start‑ups, with the same rules.  And as I said, this is difficult, because at the same time we understand that, given our size, we have a bigger impact; we reach more people.  So if something goes wrong on our services and there is harmful content, it reaches wider audiences.  But then we ban it, and we see the same content popping up on different platforms.  We saw this, for example, on the 6th of January with the Capitol Hill riots, where that kind of content was banned on YouTube, but it popped up in other places. 

So for us, the rules should be harmonized.  The rules should be the same, if possible, for all players.  And I think that is the message that I would like to convey to you.  And on top of that, please remember that we are at the IGF, whose motto is "Internet United."  Let's try to imagine the good old days when the Internet was truly one, and we could have the same experience no matter where we joined online.  And let's try to avoid the splinternet, because this is where we are heading.  We will have a different Internet in the U.S., in China, in Iran, in the European Union.  And I don't really know if that's the right way forward.  That's why we, for example, are truly happy to see new ways of discussing Internet regulations across the Atlantic.  This year the Trade and Technology Council was launched by the U.S. government and European policymakers with a view to streamlining the regulatory discussions and harmonizing them, and we are looking forward to a fruitful outcome of this.  So we'll see.  Thank you. 

>> MODERATOR: Thank you.  Marta, I would like to ask you, so how could we make these regulations more workable for the start‑up community?  What will help? 

>> MARTA PAWLAK: Not overregulating, I would say, in the first place.  And, of course, this is the world we are living in, and we are surrounded by law, and I'm saying this as a legal counsel.  But it started in Poland in 2015?  Yes, 2015.  We developed a survey amongst start‑ups, trying to answer the question of what makes a start‑up successful and what makes a struggling start‑up struggle.  And this survey shows every year that the biggest barriers around start‑ups are legal barriers.  A very quickly changing environment, and the necessity to be compliant with it, as we mentioned many times here, is a huge problem for start‑ups. 

As we heard many times from Fredrik, Katarzyna, and Marcin, all of these regulations, especially in the digital world we are discussing today, have to be adopted and changed.  But maybe we should listen to start‑ups more when regulating a new era and take their positions and situations into consideration, because I do not agree with what Marcin said, that the rules should be the same. 

As I said before, the risk exposure, possibilities, and resources are different for different kinds of companies.  And I don't think that competition law is the answer here.  It is like with taxes: there are people who pay higher taxes.  We should maybe treat the fact that we have different legal rules for start‑ups as an incentive for them.  It happens already, because at least in Poland, during the last year, we had some not‑so‑bad regulations regarding innovation, the innovative ecosystem, and start‑ups. 

Anyway, we are trying to answer the question of how to stop crying wolf and make digital regulations workable for start‑ups.  I will refer mostly to the DSA, I think, because this is something we discuss very often amongst start‑ups, and we examine it together with our European partners such as, for instance, the Allied For Startups organization.  I prepared a list for Santa Claus today, so our answer is: continue to limit platforms' liability for third‑party content posted to their sites and the transactions they facilitate; design procedural obligations with known penalties, such as a notice‑and‑action system, that could increase trust for business users and consumers; and increase the awareness and consistency of rules applied across the EU by extending the country‑of‑origin principle ‑‑ this is very important ‑‑ to the broadest possible range of legal requirements.  We didn't refer to this before, but this is very important for start‑ups. 

And avoid information burdens that discourage business users, because this is really problematic for start‑ups, and detailed verification of each action taken on the platform can be very difficult for start‑ups.  But do not overregulate this in the first place, and do it very smartly, with start‑ups present at the table. 

>> MODERATOR: Thank you so much.  So now, Katarzyna, if you had a magic wand to fix all the things in the digital regulations, what would you do? 

>> KATARZYNA SZYMIELEWICZ: No, I will not even try, because we have three minutes left, and we are talking about a huge range of obligations and rules, and I don't want to make them sound naive or ridiculous.  Okay.  Let me rephrase.  My big worry, represented in this debate here, is that we usually have so little time, so few stakeholders, and such a huge space to discuss ‑‑ not just in this room, but in many other possible rooms, including Brussels ‑‑ that there is this tendency to focus on one issue, like we did today on targeted advertising, and make it extreme.  Make it sound extreme rather than go into more sophisticated details, like the Ofcom colleague, in her excellent comment, showed: you can frame innovation, for example, in the context of consumer safety or building trust.  And you can just discuss this for one year, right?  We have the time.  That time usually comes after regulation is in place.  With the GDPR, we had many years to do exactly that.  To take these rules ‑‑ because it's principle‑based regulation; it's not detailed, it's not going into solutions, it's just giving rules for business players ‑‑ and these rules could have been interpreted to the benefit of both, I believe: consumer value and ethical business.  Has it been interpreted like this?  Have we really invested any time in a dialogue across consumer organizations, regulators like Ofcom in the UK that is mediating more than punishing, small industry, start‑ups, designers, big tech?  Have we?  This is not happening.  So you're asking me the question about how to fix regulation.  I have no idea how to fix that mess.  Don't blame me for this mess.  
The more I listen to this debate today, the more I'm convinced that the ball is in the court of businesses here: to start a real dialogue with consumer organizations, human rights organizations, but also designers and regulators, to actually fix the harms that their business model is causing before the next wave of regulation comes.  I'm sorry, colleagues, but your lack of presence in this debate caused that regulatory inflation.  If you had reacted to the concerns that were voiced years ago ‑‑ around opt‑ins, around irritating pop‑ups, around harms that were surfacing years ago ‑‑ you wouldn't have this mess with regulation.  So we'd better start talking, not like today, with one Civil Society voice, to pay lip service, but honestly talking about solutions that are based on trust and consumer value. 

>> MODERATOR: Thank you.  And now, Fredrik, you have our last minute for your concluding remarks and maybe some ideas how to fix this mess. 

>> FREDRIK ERIXON: Thank you.  I don't think I have a good solution either.  I think what is important is that we continue applying our normal competition policy in the way that we have done for the past two decades.  And it is achieving results.  It's achieving positive results.  With regulation, we want to encourage more competition, more contestability.  We want more competition between the big platforms.  We want, for the sake of argument, Apple to start activities much more in search to compete with Google, for instance.  We want more competition with, say, Apple extending its app store to include markets and transactions that you would normally find on Amazon.  We want more of that type of competition, because it's going to be helpful for consumers.  It's going to be helpful for the economy to move on. 

Then we need to have a proper discussion about the social aspects of it and have regulations that target those social aspects.  I think it's highly recommended to have much more age verification control, and I would argue for other social regulations as well when it comes to platforms, simply because I think that would be part of addressing some of the problems that we have seen arising with the platforms.  And finally, I would advise businesses, regulators, and others to get together to develop standards for how to deal, in practice, with some of the issues that we are talking about in regulations.  For instance, what type of standard should we apply when it comes to competition on a platform?  If we now think, for instance, that Apple is charging too much for sales in its app stores, what should the standard be?  What is a reasonable fee to pay? 

And I don't think you can come with heavy‑handed regulation to determine that, because it's going to differ from platform to platform, and it's going to differ over time depending on how services are changing.  So having a price command‑and‑control type of regulation by government ‑‑ I don't think that's going to work.  It's not going to achieve the outcome.  Having more space for the development of different standards, I think, could be the way forward.  Thank you. 

>> MODERATOR: Thank you so much.  We are already two minutes past our time, so I would like to thank everyone for their amazing contributions to this debate.  Again, Ms. Katarzyna Szymielewicz, President of the Panoptykon Foundation; Marcin Krasowski of Google; Marta Pawlak of Startup Poland; and Fredrik Erixon of ECIPE.  Thank you all very, very much.