The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
>> We all live in a digital world. We all need it to be open and safe. We all want to trust.
>> And to be trusted.
>> We all despise control.
>> And desire freedom.
>> We are all united.
>> LI YUXIAO: All right. Welcome to the Open Forum. Distinguished guests, dear colleagues, good afternoon. I am the Secretary-General of the Cybersecurity Association of China. Welcome to the IGF Open Forum coordinated by the Bureau of International Cooperation of the Cyberspace Administration of China and the Chinese Academy of Cyberspace Studies.
Speakers, may we request you to turn on your cameras and say hello to everyone.
>> DAVID ROBERTSON: Hi.
>> LI YUXIAO: Okay. Thank you. Thank you, all. The theme of today's Open Forum is Development and the Rule‑Making on Artificial Intelligence.
AI, a major technology bearing on the progress of human society, has impacted the international cyberspace system. During the COVID-19 pandemic, AI applications have diversified, playing a significant role in bridging the digital divide, addressing the challenges of population aging, and promoting inclusive development.
Meanwhile, issues such as technical barriers, cybersecurity, privacy protection, and data abuse have transcended traditional national boundaries and pose challenges to existing rules and laws. Countries around the world are highly concerned with the development, security, governance, and rule-making on AI.
The Open Forum brings together international organisations, enterprises, associations, and think tanks from across the globe to share their ideas and insights on this environment.
The forum covers two topics. The first is the development of AI and its impacts. The second is international governance of and international cooperation on AI.
Now, I hereby declare the forum officially open. I would like to give the floor to the Director General of the Bureau of International Cooperation of the Cyberspace Administration of China. Please.
>> XU FENG: Dear colleagues, good morning, good evening. On behalf of the International Cooperation Bureau of the Cyberspace Administration of China, I welcome you all to our Open Forum.
Now, I will deliver my speech in Chinese.
The IGF, under the framework of the United Nations, is an important Internet governance system and platform. China has attached great importance to it and has actively participated in the events and activities under its framework. We have contributed China's wisdom and solutions to this issue. Artificial intelligence is an important representative of this trend and a decisive, critical technological innovation. It is imposing profound influences on the international community, and the governance of artificial intelligence is of great importance to the whole international society.
Today, it is very timely and relevant for us to hold this Open Forum to discuss these cutting-edge issues. Hereby, on the topic of developing artificial intelligence, I'd like to share with you some of my opinions.
First, we need to accelerate the recovery of the economy on the basis of technology innovation. We have witnessed many breakthroughs in artificial intelligence, and AI technology has been applied in ever more diversified areas.
AI technology has enabled new models and has contributed to industrial cluster effects, et cetera.
We look forward to working together with the whole international community to seize this opportunity of the new round of the Industrial Revolution and to create an enabling environment for innovation. And we hope that through this, we can help the faster recovery of the global economy.
And second, we need to work together to rise up to the challenges and risks. Alongside the development of AI, we also see risks such as the abuse of technology, the misuse of assets, and infringement on personal privacy. Faced with new issues and new challenges, the international community needs to pursue win-win cooperation and develop new standards for AI in terms of the legal system, governance, privacy protection, et cetera, so as to ensure that artificial intelligence remains controllable.
Third, we need to deepen multilateral and multinational cooperation. AI development is based on the sharing of ideas among scientists in different countries. In recent years, the international community has used dialogue mechanisms like APEC, the G20, et cetera. We have exchanged many good outcomes through these dialogues, and these have contributed to the development and governance of AI technology.
We are exploring building a platform among governments, academia, enterprises, institutes, et cetera.
Fourth, we need to be people-oriented and improve people's wellbeing. Artificial intelligence is highly relevant to the development of human beings. Against the backdrop of COVID-19, artificial intelligence has contributed to the R&D of new drugs, et cetera. It cannot be ignored, however, that the digital gap between different countries and peoples is widening. For the sake of a better future, we need to attach greater importance to inclusiveness and pay attention to vulnerable groups, so as to promote the inclusive use of artificial intelligence and other new technologies.
Ladies and gentlemen, faced with the pandemic and great changes, and with both opportunities and challenges, the international community needs to unite rather than conflict. We need to abandon the Cold War mindset and instead pursue mutual trust and cooperation. And we need to work together to build a community of shared future in cyberspace.
Finally, I wish this forum a great success. Thank you.
>> LI YUXIAO: Thank you for this very important speech. You expressed four very important points on how China's government approaches rule-making on AI. Thank you very much.
Now let's move on to the panel speeches. The two topics I mentioned earlier were determined based on the opinions of a number of experts and organisations. In the following, each panel speaker will give a speech of up to five minutes. The first topic is cutting-edge developments of AI and their impacts. Now, I will give the floor to Professor Gong Ke, President of the World Federation of Engineering Organisations and President of the Chinese Institute of New Generation Artificial Intelligence Development Strategies. Professor Gong, please.
>> Sorry, it's ‑‑
>> LI YUXIAO: Okay. Professor. Please.
>> GONG KE: I think you're going to play the prerecorded presentation, am I right?
>> LI YUXIAO: Yeah. Okay.
(Speaking non‑English language)
>> LI YUXIAO: So Professor Gong will give us a speech.
>> Professor Gong will need to communicate with the UN because they didn't give us right of host. Hello. Are there any UN staff there? Hello. Could you please give the right of sharing the screen with Professor Gong? Hello, host.
>> GONG KE: Good day, ladies and gentlemen. The organizers asked me to talk about the progress of AI technology. Please allow me to use some examples, a snapshot, to show the progress.
So let me share my screen. From the technical perspective, there is notable progress in DNNs with large-scale pre-training models. Last year, OpenAI released the GPT-3 model, with 175 billion parameters, 45 TB of data used for its training, and a training cost of more than 12 million U.S. dollars.
Later on, a number of large-scale pre-training models have been released by different organisations and corporations. Here I show one released by a Chinese institute, called WuDao. In this video, you see a very good combination of sign language, text, and speech. That is a kind of multimodal language processing supported by these large pre-training models such as WuDao.
Another noteworthy progress is in brain-inspired computing. Here, we see a model reported by MIT and the Technical University of Vienna, which used only 19 brain-inspired neurons for a task that would need millions of neurons in a traditional DNN. This lightweight model has been successfully used in autopilot control, as reported in "Nature" magazine last year.
Another progress in brain-inspired computing is the theory of completeness of brain-inspired computing, reported by a team in "Nature" in October last year. This theory shows that software and hardware can be decoupled, which could significantly expand the application scope of brain-inspired computing. This was commented on by "Nature" as a breakthrough scheme in the development of brain-inspired computing.
Here is another very important progress, showing the merger of DNNs and brain-inspired networks on a chip, reported in "Nature" as dual-control chips.
Another notable progress is privacy computing. We know that a typical privacy computing approach is federated learning. The new progress is so-called swarm learning. Swarm learning also uses distributed data, but instead of centralized computation, it uses distributed computation with the distributed data. That can provide better protection of data privacy.
From the governance perspective, we see a report released early this year jointly by UN DESA, UNESCO, the Office of the Secretary-General's Envoy on Technology, and the World Federation of Engineering Organisations. This book surveys the ethical principles, technical standards, national strategies, and so on, proposed by international organisations, governments, civil society, and corporations. We see the requirements of transparency, accountability, auditability, respect for human rights, data protection, fairness, and technical robustness repeatedly mentioned in these proposals.
And the WHO has released its first report, with six principles for ensuring AI serves the public good. The first one is protecting human autonomy.
The most recent progress in AI governance is the adoption of the UNESCO Recommendation on the Ethics of Artificial Intelligence. It is the first-ever United Nations document to provide a global normative framework for AI governance, and UNESCO requires its member states to report regularly, every four years, on their progress and practices.
So in summary, although AI has made important new progress in technology and governance, it has not yet made a breakthrough in the explainability of its algorithms. That should become a key direction of future AI research, in order to lay a foundation for trustworthy AI.
In terms of governance, we think the United Nations should better play its irreplaceable role and build an open, multistakeholder governance platform based on the principles of the United Nations Charter and oriented to the Sustainable Development Goals.
I stop here. Thank you very much.
>> LI YUXIAO: Thank you, Professor Gong. Your remarks on brain-inspired computing and privacy computing were very interesting, as was your analysis of the different approaches and the different kinds of stakeholders in governance. Thank you very much.
So now here is David Robertson, Chair of Applied Logic and Vice Principal and Head of the College of Science & Engineering at the University of Edinburgh. Professor Robertson, please.
>> DAVID ROBERTSON: Thanks for inviting me. I'll just try to share my screen now, if I can. There we go. I hope everybody can see that. So in my five minutes, I think this fits in quite nicely with Gong Ke's presentation, although we didn't rehearse. I want to talk a little bit about why AI has grown so quickly and why it has so much impact now.
So I've been in AI for quite a while. I came into artificial intelligence in Edinburgh in the mid-1980s, which at that point was a big technology wave in artificial intelligence. But that particular wave was nothing like the scale of the wave we're seeing currently. So I want to say a little bit about why that's true. And I believe it's true because of fundamental differences in the way that research and technology interact with our society currently.
So this slide makes the point that AI is not only machine learning. A lot of people currently equate AI with machine learning because it's so dominant, but in fact AI is a collection of many subdisciplines. Each of these is reaching a new level of maturity, stimulated by advances in computation more generally. So sensors are no longer passive; they're becoming active agents on the edges of computer networks. Robotics architectures are robust enough to build systems in an incremental way. Language engines have raised performance to impressive levels. All of that means that human factors researchers are able to cross what we used to call the uncanny valley, from clearly synthetic to convincing artificial agents. That's fueled by the ability to make machine learning more of a commodity technology, easily shared and applied in new domains. The sort of thing Gong Ke was talking about.
Meanwhile, symbolic knowledge representation hasn't gone away. It's now managed more rigorously at large scale through things like knowledge graphs and other structured methods.
And all of this is becoming accepted as part of mainstream systems engineering, which, importantly, means that AI algorithms can be built into the heart of new system architectures rather than added on as an afterthought, which is what we used to do.
The resulting explosion in the number of semiautonomous systems has created a new area of AI focused on the coordination of social systems of agents. Gong Ke gave an example of one of those. That has created new potential for collaborative AI. At the same time, it creates new security threats from adversarial AI systems.
Against that backdrop of evolving subdisciplines, many of the breakthroughs in application are coming not from individual subdisciplines but from combinations of them. That's driving a confluence of theory into application areas: essentially a resurgence of the original AI vision, but in a much more effective and targeted style.
These confluences are now generating a different application landscape. So, for example: autonomous vehicles; realistic synthetic agents; synergistic digital twins of complex real-world systems; autonomous robotics systems with limited self- and situational awareness. These emergent areas are shaped by both the AI confluence and the target application area.
So that's creating a self-reinforcing cycle: from AI research in theory, through the confluences forced by emerging applications, and returning back to theory by discovering new synergies across subdomains.
The application demand forces an increasing speed of translation from research to application, while the real or perceived success of applications creates a stronger push on research institutions to contribute. So we get a self-reinforcing cycle. And the energy in the system seems, at least for the foreseeable future, to be sufficient to maintain the current acceleration. Which means, in my view, that governance will require renewed effort to keep pace as the cycle wraps around the development of new systems.
So thanks for listening. And I'll stop sharing.
>> LI YUXIAO: Thank you very much, Professor Robertson, for an insightful speech. We're happy to meet again after the September meeting, and I hope you're well.
>> DAVID ROBERTSON: Thanks.
>> LI YUXIAO: Now let's welcome Ms. Zhang Hui, head of AI governance research at MEGVII, to give her speech.
>> ZHANG HUI: Distinguished guests, ladies and gentlemen, thanks for inviting MEGVII to today's forum. It's also my privilege to join the discussion of AI development and its impact. I think I will give my presentation below in Chinese.
Ladies and gentlemen, as a member of the AI industry, MEGVII sees that, just like any other technology, AI is going through a phase in which we seek truth theoretically while pursuing implementation with a pragmatic approach. The two are complementary. So today, I would like to share three points.
First, AI is going through deeper integration. On one hand, AI technology is being integrated with other technologies at a deeper level. The most typical example is the integration between AI and IoT. AIoT has brought a series of new models and new scenarios, as well as new technological capabilities. With the development of AIoT, everything is connected and moving toward a more intelligent space. The physical space is being renovated and a digital twin is formed; the two enhance each other through interactive communication.
On the other hand, the real economy is being incorporated with AI across a range of scenarios, so we are using AI to empower our economy and create value. MEGVII has been established for more than ten years, and in fields like computer vision we have built up our own capabilities. We have three major verticals that leverage AIoT solutions, combining software and hardware to push forward the application of AI.
Second, the people-centered value of AI is becoming more obvious. The core of AI is people: we need to be people-centered and provide services to people, and I believe this principle will be applied realistically. For example, in cities, MEGVII promotes the idea that the space of the city is for services, so we inject the capability of AI into production, life, and space to build a more intelligent city.
And on the industry side, the value is presented in the enhancement of capabilities through AI. For example, in intelligent logistics, we can use the dispatching system of a robotics system to help an enterprise triple the usage of its inventory space, while the picking staff's daily steps were reduced from 50,000 per day and average daily production increased by five times.

I also wanted to talk about AI as a capability but also a responsibility. We need to develop responsible as well as sustainable AI. We need to promote technological innovation in the industry, and we also need to focus on both development and supervision. In recent years, China has issued a series of regulations, for example the Personal Information Protection Law. Different institutions are participating in this, and the industry is also looking inward to develop self-discipline. This multipronged model is being improved day by day.
From an enterprise perspective, we are promoting this process of seeking truth while taking a realistic, pragmatic approach, and we believe our value can be furthered. In this process, MEGVII is persevering across different scenarios of AI deployment, including establishing industry regulations and collaborating with academic institutions, manufacturing industries, et cetera. So in the future we will continue to implement and step up our innovation, and we look forward to collaborating with friends from different industries all over the world to promote the development of AI. Thank you.
>> LI YUXIAO: Up next is Mr. Wang Bo, Assistant to the Chairman of iFLYTEK and Vice President for the Beijing-Tianjin-Hebei region.
>> WANG BO: Hello, everyone. My name is Wang Bo, and I work at iFLYTEK. iFLYTEK is an AI company; we focus on speech recognition, machine translation, and natural language processing.
Next, I will use Chinese to introduce the progress of AI application industry.
In the post-pandemic era, at iFLYTEK and other firms in our industry, artificial intelligence has achieved rapid development in source technology innovation, industrial safety, epidemic prevention and control, the resumption of production, et cetera.
In the past 24 months, iFLYTEK has shifted from single-point technology breakthroughs to machine cognition, multimodal applications, and complicated scenarios: multi-turn dialogue, diagnosis, machine translation, targeted identification, and multilingual competitions. This November, iFLYTEK won six championships across the 16 tracks of the open ASR competition, placing first in the world. iFLYTEK has now opened 442 capabilities, gathered 2.71 million developers, supported 1.3 million apps, and connected 3.65 partners in the ecosystem. We will work together with other leaders in the industry to provide capability services on a low-code basis.
This platform supports multimodal sensing of blood pressure, heartbeat, pulse, et cetera. It can also sense people through sound, signs, et cetera.
And to enable better learning of the Chinese language, we have built a learning platform covering 179 countries and regions, with 5 million registered users. We say that artificial intelligence must be used in tangible scenarios, that these scenarios can be promoted at large scale, and that statistics can prove the effects of such applications.
Through breakthroughs in pre-training and small-data learning algorithms, we can reduce the cost of applying artificial intelligence in many areas. Artificial intelligence has never been as tangible as it is today. The current five-year plan period, especially, is a very critical window for solving important social issues through artificial intelligence.
Population aging is still under way, and we are focusing on it. In Tianjin, we have developed some use cases: for example, artificial intelligence can collect data on the use of natural gas, tap water, et cetera, to sense the living situation of elderly people. In terms of smart education, we can analyze the learning situation of kids, and based on the special features of each kid we can provide targeted guidance and instruction, so every kid may have different exercises. This system has covered over 1 million teachers and students in 14,000 schools across 32 provinces.
In terms of smart medicine, we follow the same logic: helping frontline doctors in villages make better diagnoses. We have covered more than 200 Level 3 hospitals in 26 provinces and municipalities, and we have established an automatic system for serious diseases and infectious diseases. Also, sales of to-C products like translation machines have been skyrocketing in many cities and provinces.
And sound recognition has been expanded to industry, where it can be used in the detection of faults in machines.
In the following five to ten years, iFLYTEK hopes to build a tower for human beings to cross over language barriers, so that we can better build the community of shared future for mankind. Our idea is to work together with all developers to achieve the ultimate dream of artificial intelligence: to let everyone stand on the shoulders of artificial intelligence and usher in an even greater, brand-new era.
We look forward to taking the platform of IGF to promote the development of the community of shared future for mankind. Thank you.
>> LI YUXIAO: I now give the floor to Mr. Ma Yanjun, Senior Director of Baidu's Deep Learning Platform. Please.
>> MA YANJUN: Distinguished guests, I'm an AI scientist at Baidu. It's a pleasure to be invited here. My own background is natural language processing; I did my Ph.D. in Dublin, Ireland. So it's a great pleasure to be here today.
Well, my topic today: I'm going to talk about several trends I have observed over the years. That is, the fusion of AI technologies, the lowering of the barrier to AI adoption, and open source.
So in the following, I think I'm going to speak in Chinese in the following. Well, I have several slides as well in Chinese, though. Okay.
Currently, we are in a new round of scientific and industrial revolution, and the whole society is undergoing profound changes. Statistics show that in 2020 the digital economy reached 39.2 trillion RMB, accounting for 38.6% of the total GDP of China.
In the process of the robust development of artificial intelligence, the digitalization of industries is actually a new phase of the development of the digital economy, and it has penetrated all links of economic activity.
As we can see, the share of the non-Internet IT industry has increased from 53.4% in 2018 to 67.9% in 2020. We have also seen that the combination of artificial intelligence and industry has produced more and more professional cases and scenarios.
At Baidu, we have an AICA training programme. This slide shows the trainees' selection of their research subjects. As we can see, our trainees' research subjects have become more and more combined with their own industries.
As I have mentioned, there are three features. The first is technology fusion and innovation, which has become a more and more obvious trend. Knowledge and deep learning are being integrated. As Professor Gong mentioned, large-scale pre-training models represented by GPT-3 have brought many breakthroughs for artificial intelligence; they have very strong generality and transferability.
At Baidu, on the basis of knowledge graphs, we have combined this technology with neural networks and achieved the best results across hundreds of models. This is a very recent trend in this process.
Also, deep learning frameworks have been integrated with smart chips. As deep learning technology has gone deeper, chips and frameworks need to be combined, considering power consumption, latency, et cetera. For example, our open-source deep learning platform has been adapted to more than 30 chips worldwide. And this is my first point.
My second point is the lowered threshold. The application of artificial intelligence has become wider and wider, just as I mentioned with the combination with chips. As applications spread, we need more low-threshold open tools. For example, people who don't know how to write code can just use a visualized interface without writing any code, while developers with an AI technology background can develop their own models. This means the AI platform needs to output different levels of capability, and in this process, lowering the threshold has become more and more important.
Thirdly, artificial intelligence development has been propelled by open source, which has become a very obvious trend. Open source has become an important model and a core engine of technology innovation and industrial development. We have been talking about open-sourcing not only source code but also data and technology platforms; all of this can be open source, and it can support the high-speed development and industrial application of artificial intelligence.
As I have mentioned, Baidu's Deep Learning Platform is totally open source. It now has more than 3.7 million developers. That is to say, as far as I'm concerned, open source will become a very important engine for promoting the development of artificial intelligence. With technology fusion, a lowering threshold, and open source, we hope we can work together with other partners and friends to explore the governance of artificial intelligence. Thank you.
>> LI YUXIAO: Next-generation AI is injecting new vitality into economic and social development across the globe. Smart technologies, ranging from 5G and big data to smart mobility and smart cities, are reshaping social development and people's lives at an unprecedented pace and scale.
In the fight against the COVID-19 pandemic, AI has played an important role in virus detection, face recognition, and daily monitoring, with a profound impact on economic development, social governance, and people's livelihoods across the world. With the extensive application of AI, however, comes concern about how to strike a balance between its benefits and its potential risks. Therefore, it is urgent to dig into the topic of AI governance and cooperation on a global scale.
Now, let's move on to the second topic: international governance of AI and international cooperation. Let me give the floor to Professor Xue Lan, Dean of Schwarzman College and Dean of the Institute for AI International Governance at Tsinghua University. Professor, please.
>> XUE LAN: Thank you very much for this opportunity to speak at this forum. Let me share my screen. All right. Can you all see the screen?
>> XUE LAN: Okay. Given the limited time and the previous speakers, I will zoom in on the global governance of AI. In the last few years, China has really done a lot of work on domestic AI governance issues. I will not belabor those issues; I will directly focus on the global governance of AI.
First of all is why. Indeed, some people say that China has been working hard on developing the technology and its applications, so why should we care about global governance? Indeed, I do think we have to care.
First of all, I think global governance can facilitate cooperation toward common goals. We've already heard the previous speakers talk about how AI can promote human wellbeing.
Cooperation and coordination among the international community will really be needed on many cross-sector, cross-border issues. For example, colleagues have talked about open source, about data flows, about cybersecurity, and, of course, not to mention scientific collaborations among different countries. All of those mean that in order to maximize the advantages of AI, we do need international collaboration.
And the second thing is common concerns. I think AI could have a devastating impact on human society, and those issues are unlikely to be addressed by a single country. So I think we need a proper global governance regime in order to avoid a race-to-the-bottom competition that might lead to a new arms race in AI. Those issues need to be addressed at the global level.
And the third is that we need to reconcile differences among countries. Different countries have very different cultures, different stages of development, and so on, and thus distinctive institutions and strategies.
So I think we need to work together to build a global governance system so that we can reach a consensus and, hopefully, promote proactive actions at a global level. Those are a few simple reasons.
Second, in terms of China's position on international AI governance, here is a very quick summary of my personal view. I think China has been trying to work with other countries and the international community to really promote AI for good; to develop AI+, meaning the application of AI in various areas, and to study its social impact; and to maintain open development, promote international collaboration, and oppose decoupling.
China is working hard to support inclusive rule-making, to support UN-based discussion and debate on governance principles, and to prevent the dominance of a few. We also want to respect that different countries have different laws and regulations, and to try to leave space for AI's rapid development.
In particular, in the development of AI governance, we recognize the role that the business sector can play.
Third, what is the next step for international AI governance? The first thing is that we need global platforms to coordinate AI governance issues. The recent publication of the UNESCO document on AI principles is an excellent step toward achieving that.
The second is that we really need to learn lessons from global governance of other important issues: for example, Internet governance, nuclear governance, space law, and climate change. We can learn from all of those major global governance regimes, and that can help us to develop a proper AI governance regime.
And the third is to strengthen scientific collaboration in AI research, which is already prospering in many areas. We also need to work together on issues related to AI governance and to study the social impact of AI.
And the fourth is to seek common values while respecting differences, considering the social, economic, political, and cultural differences among countries.
Finally, we should develop common principles and norms to guide the healthy development and deployment of AI.
That, in my personal view, is how we need to move forward to promote international collaboration on AI governance. Let me stop here. Thank you.
>> LI YUXIAO: Thank you, Professor Xue. We know you have made great contributions to AI governance and rule-making in China. Thank you for your great speech.
Next, Thorsten Jelinek, Founder and Managing Director of EPG Digital Platform Governance, will share his insights. Please.
>> THORSTEN JELINEK: Thank you. Thank you very much. Ladies and gentlemen, dear friends. I can see some friends. I'm delighted to attend this important meeting.
Let me read this to you, please. The crisis in multilateralism comes along with the return of sovereignty. The problem is not sovereignty itself, of course, but the fine line that separates sovereignty from protectionism and fragmentation. Today's return of sovereignty risks undoing the achievements of the Internet revolution, including the rise of the global innovation and value networks that are so important for tackling today's global issues, some of which we have already heard about today.
The focus on digital sovereignty is largely a response to the disproportionately negative impact of digitalization on communications, society, the economy, government, and even foreign relations. It is supposed to provide an environment that safeguards our participation in the digital world and improves its trustworthiness for the digital future.
Digital sovereignty is an opportunity not to be trapped in a straitjacket again, this time not of hyper-globalization but of hyper-connectivity. Not being trapped in a straitjacket means the ability to regulate national affairs more independently and to better cushion the disruptions of digitalization.
Here we can see some level of convergence across the major economies in terms of privacy protection, cybersecurity, and fair digital competition, which Professor Xue also highlighted.
The recent scrutiny of big tech is a clear expression of that turn, or return, toward more digital sovereignty. Governance always means striking a balance between autonomy and restriction. Take the European Commission's AI Act proposal as an example. Industry has raised the concern that this act would stifle innovation and competitiveness, while for civil society groups it does not go far enough in regulating or prohibiting risky AI applications. Such an approach has received strong support, as we already heard, from the nonbinding UNESCO recommendation on the ethics of AI.
And yet there is another twist. The European Union has made digital sovereignty a key pillar of its political agenda and wants to become the world's safest digital environment and market and a global digital norms builder, which is quite laudable. However, the United States criticizes Europe's digital sovereignty strategy as protectionist. But to be fair, the United States has dominated not only the Internet since its rise but, before that, the era of regulated telecommunication monopolies for most of the 20th century.
Now, with the rise of China, we have already seen a rebalancing in terms of technology leadership and institutional domination. Those developments have also strongly added to the return to territorial and digital sovereignty.
I'll focus on some similarities and differences between the Internet and AI in relation to governance. Firstly, AI, like the Internet before it, is an engine of change, the driving force of the 21st century's transformation. The Internet was such a force of creative destruction, but it did not promote or adhere to any specific ideology. Neither Internet technology nor AI carries any intrinsic ethics, except that the underlying scientific drive is one of relentless objectification and rationalization.
In the past, the rapid global adoption of Internet technology was largely a result of capitalist expansion and, you might call it, benevolent homogeneity, at least initially. With AI, we won't witness such benevolent homogeneity. It has become a race for competition, dominance, and control. For that matter, we are obviously discussing how to better govern it.
Secondly, why do we fear AI? Because objectification will no longer be the sole faculty of the human brain. Subjectivity provides a sense of autonomy and freedom, and the question is how much of it will remain in the age of automated objectification.
Furthermore, intelligent automation will not only disrupt labor markets but also entire development models that have relied on the absorption of labor. Many workers might not be able to re-skill from jobs of execution to jobs of exploration, from repetition to dexterity and creativity, from nonsocial interaction to social interaction and empathy. Those risks don't just pose another technology trap. Because our modern world relies on technology, and markets elevate the forcefulness of technology, AI will continue facing an insurmountable governance gap. It is this void that makes the disruptive character of AI not a simple choice between doing good and doing harm. Thus, without intervention, the current trajectory of history will determine the way we use AI. In other words, we need an overarching principle of governance and cooperation. We do need human-centric AI. We do need AI for sustainability, or sustainable AI. But we also need responsible global competition and a mechanism for avoiding or reducing conflict and rivalry.
To conclude, however: given that different cultural values and political systems, different histories and stages of development, will continue causing divergent views, a more flexible approach is needed. The reform of the global political system must be open to ambiguity, must embrace special and differential treatment of countries, and must attempt to solve problems with a more flexible, case-by-case approach.
We need international governance and cooperation that involve all major and rising powers to counterbalance the strong demand for territorial and digital sovereignty.
I believe that only then will our efforts have a chance of no longer being at risk of serving primarily as a conduit of fierce competition and national security. Thank you very much.
>> LI YUXIAO: Thank you. Up next is Professor Ayad Al‑Ani, Associate Member at Einstein Center Digital Future.
>> AYAD AL‑ANI: Yes. Thank you very much. Good morning from my side. I would like to talk to you in the next couple of minutes about how to achieve global ethical standards in the development of automation.
After listening to all of you, it is really not difficult to assume that in the future most of our economic activities will be guided by some kind of artificial intelligence that will, in the end, be responsible for decisions that shape our lives. An Austrian scientist assumed that by the year 2050 factories will be run by some sort of robotic authority. Interestingly, this authority will not be guided by personal gain, income, or profit; its main motivation will be the survival and wellbeing of the factory and of the community that houses the factory and extracts taxes from it.
The dynamics of capitalism will therefore be replaced by the dynamics of reproduction. Companies that squander resources by paying their owners will be pushed out of the market, or so goes the scenario.
The interaction between fully automated entities will also need to be guided by some sort of ethical standards, reflecting societal paradigms and legal and economic rules and assumptions, for instance.
In a nutshell, this leading intelligence would be designed in such a way as to enjoy servicing humans.
As we are heading toward a multi-civilizational world, meaning there will be some kind of alliance of countries of the West and a Global South dominated by China, we need to be concerned with the question of whether machines and artificial intelligence will also be part of these respective blocs, and whether they will be just as antagonistic. If machines are recruited into this not-so-subtle struggle, the conflict will become dangerously multiplied and automated. Is there another way?
Thinkers of the future have sometimes assumed there will be one set of rules guiding machines. Of course, I am thinking here of Asimov's famous Three Laws of Robotics. The problem is that it was never described how those laws became enshrined in the production of machines. Asimov also described a past conflict between machines and humans, after which machines were forbidden on Earth: a scenario in which humanity finally unites to struggle against machines it can no longer control.
Translated into our situation, with some difficulty, this would mean we would already have one global civilization trying to regulate and develop the machines in a certain way, one that ensures future editions of human laws are translated into design guidelines: for instance, that robots should never harm humans, et cetera.
The question then is whether the current international organisations are in a position to impose such design guidelines or ethical standards, and also whether societies are in control of the private companies that are designing this intelligence. I think the answer is, unfortunately, no. Not only is there no global institution in charge powerful enough to enforce such rules; there is also the question of whether political forces are still in full control of their intelligence-producing entities.
At this point, maybe a global virtual room, a global platform like the anticipated metaverse project or other IoT platforms, could be an option. The metaverse project, for instance, does contain many issues and difficulties for humanity, such as the difficulty of differentiating between the real and the imaginative. Still, creating a single civilization or room, even though it would be a virtual copy, might be a solution, if all could participate in the decentralized development of such a platform.
Machines would have to adhere to some kind of governance and rules, and if conflicts arose, they would still be in the virtual copy and could be contained and fixed before spilling over into the real world. There would then be enough time and enough room for discussion and solution finding. This may not happen, but there is a chance that it could. Thank you very much.
>> LI YUXIAO: Thank you very much, Professor Ayad Al‑Ani. Up next is Professor Luca Belli. Please.
>> LUCA BELLI: Good morning, everyone. Thank you so much for having invited me to be part of this IGF Open Forum on development and rule-making on artificial intelligence. I would also like to salute the Chinese Academy of Cyberspace Studies for having organized this timely conversation. My name is Luca Belli, and I would like to briefly present some of the regulatory initiatives that are happening in Brazil at this very moment. As you all know, and as I am sure you have been discussing over the past minutes, states, various regional bodies, and intergovernmental organisations are discussing quite intensely, I would say, regulatory frameworks for AI systems utilized in both the public and private sectors. There are a lot of questions that are still unanswered: for instance, whether a self-regulatory approach, a co-regulatory approach, or a legislative approach is best suited to regulate AI; what kind of governance framework should be utilized; and which kind of regulatory authority should have competence to oversee the implementation of AI regulation, a specific dedicated AI regulator or the existing sectoral regulators. There are a lot of questions that are not yet answered.
There are already some global initiatives that try to frame AI: the recent UNESCO guidelines on AI ethics, or the OECD Recommendation on AI of 2019. The current Brazilian bill, of 2020, is largely inspired by the OECD Recommendation. Brazil has signed on to this Recommendation, and the very Rapporteur of the AI bill has referred to the OECD's AI Recommendation. The aim of this Recommendation is to bring about a safe environment for the use of AI, with transparency, ethics, and respect for fundamental rights. The bill stresses the need to apply AI regulation in synergy with existing regulation, chiefly the recently approved Brazilian Data Protection Law, a sort of GDPR with Brazilian characteristics. The AI bill consists of 16 articles; it is a very concise bill, largely inspired by the OECD recommendations. And there is no specific limitation on types of AI that should not be implemented, that should be prohibited. There is no such prohibition, as we may see, for instance, in the proposals emerging at the European level.
The text highlights that measures will be implemented according to the maturation and evolution of the technology, with a risk-based approach, but implemented on a sectoral basis.
And it is quite clear why such a light-touch approach is proposed by the bill: it enormously facilitates approval of the law in Congress. But it also creates some problems.
The main problem with this very light approach is that it lacks specific norms, specific guidance, and leaves the door open.
>> Because of the time limit, we'll move on to the concluding remarks session.
>> LI YUXIAO: Okay. Now the Vice President of the Chinese Academy of Cyberspace Studies will give the closing remarks. Please.
>> XUAN XINGZHANG: Ladies and gentlemen, friends, we have used one hour to talk about the development and rule-making of artificial intelligence. This year is the sixth consecutive year that the Cybersecurity Association of China has held this forum. We have seen representatives from different institutions and from academia gather to share our opinions and ideas. Artificial intelligence is influencing people's lifestyles and ways of production, and it has also posed new tasks to governments in terms of legislation, governance, et cetera.
I would like to propose some of my points. First, we need to shoulder the responsibility of the era together. Artificial intelligence means new opportunities.
However, the uncertainty of artificial intelligence has brought some challenges to governance. Think tanks need to pursue people's wellbeing and enhance the judgment of risks, so as to enable AI to better serve people's lives.
And second, we need to be a good bridge between different parties. Promoting good governance of AI is a task for the international community. We need to make innovations in methods of governance, in terms of ethics, et cetera, and contribute our wisdom to the governance system.
And third, we need to abandon prejudice. Some people regard AI as an evil technology. We need to shoulder our responsibility to build consensus among different parties and to resolve misunderstandings.
And fourth, AI is an open technology. Its development needs the cooperation of different participants and different parties.
The Chinese Academy of Cyberspace Studies has published its annual reports for five consecutive years. We work together with other partners to contribute our wisdom to the development of AI. We hope to enable AI to serve human beings. This is the task of the think tank. We need to work better to improve people's wellbeing and to promote social progress through artificial intelligence.
Thank you. Thank you for your participation.
>> LI YUXIAO: Thank you all, again, for your insights and contributions to this forum. I sincerely thank you and hope that all parties will continue to exchange, to enhance trust, and to make efforts for the development of cyberspace and a bright future for all. Thank you again for your participation. Let us end the forum here. Thank you. Thank you.