IGF 2023 WS #33 Ethical principles for the use of AI in cybersecurity

Wednesday, 11th October, 2023 (06:15 UTC) - Wednesday, 11th October, 2023 (07:45 UTC)
WS 1 – Annex Hall 1

Cybersecurity, Cybercrime & Online Safety
Cyberattacks, Cyberconflicts and International Security
New Technologies and Risks to Online Security

Organizer 1: Jochen Michels, Kaspersky
Organizer 2: Genie Gan
Organizer 3: Dennis-Kenji Kipker, University of Bremen
Organizer 4: Gladys O. Yiadom, Kaspersky

Speaker 1: Noushin Shabab, Private Sector, Asia-Pacific Group
Speaker 2: Amal El Fallah Seghrouchni, Civil Society, African Group
Speaker 3: Dennis-Kenji Kipker, Technical Community, Western European and Others Group (WEOG)
Speaker 4: Anastasiya Kazakova, Civil Society, Eastern European Group

Onsite Moderator

Genie Gan, Private Sector, Asia-Pacific Group

Online Moderator

Jochen Michels, Private Sector, Western European and Others Group (WEOG)

Rapporteur

Gladys O. Yiadom, Private Sector, Western European and Others Group (WEOG)

Panel - 90 Min

Policy Question(s)

A. What are the key ethical principles that should be considered when using AI in cybersecurity?
B. What concrete measures must be taken by different stakeholders to implement the ethical principles in practice and make them verifiable?
C. How can a permanent multistakeholder dialogue and exchange on this be stimulated?

What will participants gain from attending this session? Attendees will receive input on which ethical aspects should be considered when using AI in cybersecurity and will be able to share ideas on this with the panelists and other attendees. The ideas will be discussed, new suggestions made, and the proposals further developed. The goal is to develop a basis that can serve as a guideline for industry, research, academia, politics and civil society in developing their own ethical principles.

Please find a document with ethical principles developed by Kaspersky that will be presented and discussed during the session here: https://box.kaspersky.com/f/f6a112eef3bd4a2ba736/?dl=1


We are currently witnessing the swift development of artificial intelligence (AI), which has the potential to bring many benefits to the world, including the strengthening of cybersecurity. AI algorithms help with rapid identification of and response to security threats and automate and enhance the accuracy of threat detection. While numerous general ethical principles for AI have already been developed (e.g., in 2021 UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence), there is currently no specific set of ethical principles for the development, distribution, and use of AI components in cybersecurity. Given the particular opportunities, but also the risks, of AI in cybersecurity, there is a need for a broad dialogue on such specific ethical principles. For this reason, Kaspersky has developed initial ideas on which aspects should be taken into account; these will be discussed and further developed in the workshop. Some key ethical principles could be as follows:

• The human must remain in control: While AI systems are designed to operate in a self-contained and autonomous mode, human control remains an important element of their implementation.
• Developed and used for cybersecurity: The sole goal of AI systems developed for digital security is to provide users with the best cybersecurity solutions and services; they must not be usable to negatively impact any system.
• Safety comes first: When designing and developing AI systems for cybersecurity, it must be ensured that their operation does not negatively affect users or their infrastructure.
• Be transparent: Openness and readiness for dialogue with users and stakeholders, as well as clarity in the model of operation of the algorithms, should be key goals.
• Maintain privacy: Training data play a vital role in the implementation of AI systems. Processing such data must be based on respecting and protecting people's privacy.

Expected Outcomes

After the session, an impulse paper on “Ethical principles for the use of AI in cybersecurity“ will be published. It will reflect the discussion results and will be made available to the IGF community. In addition, the paper can be sent to other stakeholders to gather complementary feedback. 

Hybrid Format: The moderators will actively involve the participants in the discussion, for example through short online surveys at the beginning, after the initial statements, and at the end of the session. The survey tool can be used by both onsite and online participants. This will generate additional personal involvement and increase interest in the hybrid session. During the "Roundtable" part, active participation is possible for both onsite and online participants, as all participants should actively contribute their ideas. Both onsite and online participants will have the same opportunities to participate. Planned structure of the workshop:

• Introduction by the moderator
• Survey with two questions
• Presentation of the draft principles by the Kaspersky speaker
• Brief impulse statements by the other speakers with their views on the principles
• Survey with two questions
• Moderated discussion with the attendees onsite and online – roundtable
• Survey with two questions
• Wrap-up

Key Takeaways (* deadline 2 hours after session)

1) The use of AI/ML in cybersecurity can make important contributions to strengthening cybersecurity and resilience. However, its use must be responsible and sustainable. In this context, ethical principles are an important guideline that helps users of cybersecurity solutions understand, assess, and make informed decisions about the use of such components.

2) Ethical principles presented and discussed in the workshop should be further developed. Human control, transparency, safety, and privacy are of utmost importance. Kindly find the document with the ethical principles developed by Kaspersky here: https://box.kaspersky.com/f/f6a112eef3bd4a2ba736/?dl=1

Call to Action (* deadline 2 hours after session)

1) An international multi-stakeholder discussion on ethical principles for the use of AI in cybersecurity is needed. Perhaps the IGF can take this topic into account in the future work of the PNAI (Policy Network on Artificial Intelligence).

2) In addition to ethical principles, a risk-based regulation on AI and international governance standards are needed.

Session Report (* deadline 26 October)

The rapid development of artificial intelligence (AI) has translated into many benefits in cybersecurity, raising the overall level of protection. Detection of and response to cybersecurity threats have become more efficient with the use of AI/machine learning (ML) systems. While the opportunities brought by AI cannot be disputed, the technology can also be diverted for malicious purposes. In this context, Kaspersky deemed it crucial to open a dialogue on ethical principles for AI in cybersecurity. Although UNESCO has issued recommendations on the ethics of AI, the growing use of AI/ML makes the need for ethical principles for AI systems in cybersecurity urgent.

The session started with two polls: online participants were asked 1) whether they believed AI would reinforce or weaken cybersecurity, and 2) whether and how the use of AI in cybersecurity should be regulated. Respondents agreed that AI systems were beneficial to cybersecurity and that, instead of creating specific regulations, lawmakers should seize the opportunity to reinforce existing cybersecurity regulations with specific provisions on AI.

Noushin Shabab, Senior Security Researcher at Kaspersky, highlighted the risks and opportunities of AI/ML systems in cybersecurity and presented the six ethical principles for the development and use of AI/ML set out in Kaspersky’s newly published white paper:

1. Transparency;

2. Safety;

3. Human control;

4. Privacy;

5. Commitment to cybersecurity purposes;

6. Openness to a dialogue.

For Professor Amal El-Fallah Seghrouchni, Executive President of AI Movement (the Moroccan International Center for Artificial Intelligence), AI could indeed improve cybersecurity and defense measures, enabling greater robustness, resilience and responsiveness of systems. Yet AI could also enable sophisticated cyberattacks to scale up, making them faster, better targeted and more destructive. Therefore, there was a need for ethical and regulatory considerations.

Anastasiya Kazakova, Cyber Diplomacy Knowledge Fellow at DiploFoundation, discussed how cyber norms for responsible behaviour could be implemented. Despite the need and the willingness of policymakers to regulate AI, she underscored the lack of clarity with regard to AI/ML operations. Defining AI also appears to be a challenge for legislators. Nevertheless, she recommended that AI regulations focus on outcomes rather than on specific technologies, in order to align with users' most pressing needs and concerns. In her opinion, cybersecurity vendors could play a role in promoting a bottom-up approach through the adoption of self-regulation measures.

Prof. Dr. Dennis-Kenji Kipker, expert in cybersecurity law at the University of Bremen, questioned the need for AI-specific regulations in cybersecurity. At the forefront of cybersecurity regulation, European lawmakers have avoided naming specific technologies in existing regulations (the NIS 2 Directive) or in draft legislation (the Cyber Resilience Act).

Members from the audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. Furthermore, they emphasized the significance of identity in the realm of security and the crucial role that AI can play in safeguarding identities.

At the end of the session, participants agreed on the relevance of ethical principles in cybersecurity, as they represent an important guideline that helps users of cybersecurity solutions understand, assess, and make informed decisions about the use of such components. A multi-stakeholder discussion on ethical principles for the use of AI in cybersecurity is now needed to establish international governance standards.