IGF 2022 Lightning Talk #2: A global framework for AI transparency

    Time
    Friday, 2nd December, 2022 (07:15 UTC) - Friday, 2nd December, 2022 (07:45 UTC)
    Room
    Speaker's Corner

    Women Leading in AI 
    Equality Now

    Speakers

    Ivana Bartoletti, Founder and Chair, Women Leading in AI Network; Author and Commentator; Visiting Fellow, Oxford Internet Institute; Global Chief Privacy Officer, WIPRO Technologies. www.ivanabartoletti.co.uk

    Onsite Moderator

    Emma Gibson

    Online Moderator

    Ivana Bartoletti

    Rapporteur

    Emma Gibson

    SDGs

    8. Decent Work and Economic Growth

    Targets: Unaccountable and opaque algorithmic decision-making risks the softwarisation of poverty and existing discrimination, harms women and especially women of colour, and threatens both a more equal distribution of digital dividends and inclusive economic growth. The introduction of a transparency framework will therefore support SDGs 1, 5 and 8.

    Format

    Lightning Talk

    Duration (minutes)
    30
    Language
    English
    Description

    This Lightning Talk will focus on AI transparency as a necessary step towards winning the world’s trust in innovation. The talk will start by demonstrating how outsourcing decision-making to opaque algorithms harms our democratic processes and erodes trust in democracy itself. Bartoletti will then propose a transparency framework with key tenets that ought to be enshrined in legislation and policy across the globe. The framework will be rooted in the concept of accountability as a principle that binds humanity together, especially as we strive towards equitable growth and economic development. The speaker has researched the topic extensively: - https://www.the-yuan.com/185/Transparency-by-Deliberation.html - https://medium.com/@reshaping_work/towards-ai-transparency-can-a-partic…

    Remote talk with online moderator


    Session Report

    This session called for the introduction of an international framework for the governance of AI, intersecting law, human rights and people's data. The speaker assessed that there is only a short policy window in which to introduce such a framework to contain and direct the consequences of the technological age.

    How do we update the human rights framework so that it fully applies in the digital age? For example, the Metaverse will blur the distinction between our physical and digital lives.

    Data is not a natural phenomenon: we create it and have a choice about its creation. Data is not neutral; it mirrors society as it is, and while it can inform policy-making, it can also be misleading. Algorithms edit, allocate and shape the reality we see. Technology is being used to surveil us, and when data is used for planning and policy, it can be a problem for the most vulnerable in our society.

    We need transparency: people need to understand the risks of algorithmic decision-making and how to challenge its power.

    How to challenge it:

    1. Individuals need to become co-creators of data, deciding when data is created or stored and when it is not.

    2. Equality needs to be established as a core principle. We assume that an individual should be able to understand an algorithm, but algorithms are applied in opaque ways, which makes them difficult to grasp and therefore to challenge. Algorithmic decision-making can automate existing inequalities, so algorithms need to promote equality.

    3. We need a global 'kitemark' to signify that a system has been audited for fairness.

    4. There needs to be a better allocation of the burden of proof when someone has been discriminated against. How can the burden of proof be rebalanced so that discrimination is not so difficult to prove?

    AI holds huge opportunities for humankind, and global agreement is needed on how to make it work better. The public sector needs to conduct human rights assessments of its AI applications.