IGF 2022 WS #258 Governing AI & Education Technologies Transforming Education

Time
Friday, 2nd December, 2022 (09:00 UTC) - Friday, 2nd December, 2022 (10:00 UTC)
Room
CR5

Organizer 1: Velislava Hillman, Education Data for Data Safeguards
Organizer 2: Molly Esquivel, Concordia University, Irvine

Speaker 1: Priscila Gonsales, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Samantha-Kaye Johnston, Technical Community, Latin American and Caribbean Group (GRULAC)
Speaker 3: Emmanuel Chinomso Ogu, Civil Society, African Group

Moderator

Velislava Hillman, Technical Community, Eastern European Group

Online Moderator

Molly Esquivel, Civil Society, Western European and Others Group (WEOG)

Rapporteur

Molly Esquivel, Civil Society, Western European and Others Group (WEOG)

Format

Other - 60 Min
Format description: The session lasts 60 minutes in total. Format: symposium. Each presenter speaks for 10 minutes, leaving time for interaction with the audience. At least one presenter (the rapporteur and/or the organiser) will be onsite to engage with the onsite audience, and one dedicated presenter will moderate questions from the online audience.

Policy Question(s)

1) Questions relating to the key stakeholders (children and literacy): What is the current state of critical reading readiness in the Caribbean and African regions (with a focus on key countries, including Jamaica and Barbados, and two comparator countries in Africa)? How are students' voices being integrated into the development of critical reading readiness? What are the learning opportunities across each region (i.e., the Caribbean and African regions)?

2) Questions relating to governance of the AI/EdTech sector: (How) can policymakers adopt "relational ethics" as the starting point for examining, assessing and evaluating AI-integrated EdTech systems in education? This requires that policy measures move away from enabling the development of predictive tools (with no underlying understanding) for the sake of mere innovation, towards valuing and prioritising an in-depth, contextual understanding of what such systems should be expected to deliver and why (why these systems and not other measures).

2.1) The complexities that policymakers do not yet consider arise at a number of levels. At the legal and technical level, educational institutions need to understand the functionalities of AI-integrated EdTech products. Yet these functionalities may change, and companies may update their terms and conditions or even change ownership and jurisdiction, all of which makes it harder to mitigate the risks arising from the transfer and use of children's data. Our workshop addresses these challenges with actionable proposals for governance, monitoring and oversight of the sector. At a personal and pedagogic level, little is known about students' and educators' views of, and experiences with, AI-integrated EdTech systems. What do students understand about these systems? How can children in particular, and adults more generally, understand something they a) cannot see and that b) happens without their knowledge or consent? Do they trust something they may not understand? Do they even have the option to opt out of such systems? To this end, our proposal addresses the need for data and algorithmic literacies; the need for transparency from EdTech businesses about how their products work; and the need for policy and legislation that require the EdTech industry to submit to independent auditing and oversight, and to be transparent and held accountable when their products do not live up to their claims.

Connection with previous Messages: This session builds upon the key IGF 2020 and 2021 messages about ensuring that policy addresses the need to safeguard children's basic rights and freedoms, especially in increasingly digitalised learning environments where private entities provide advanced education technology products and services.

SDGs

1.2

Targets: 4 (4.1, 4.3, 4.4, 4.5, 4.6, 4.a), 9 (9.5, 9.b, 9.c), 12 (12.a)

Description:

Artificial Intelligence (AI) and machine learning (ML) increasingly pervade the global education sector. Teachers have started to use AI tools in co-design and assessment, to detect students' emotional states and even to predict possible future career pathways. Some of the conveniences of having advanced education technology (EdTech) products and services from pre-kindergarten to higher education and training are clearer than others. Research since at least the 1980s (Turkle, 1984) has shown how intelligent toys encourage young children to revise their ideas about animacy and thinking. Children can attribute intent and emotion to objects with which they can interact socially and psychologically. In the classroom, children with disabilities (such as autism and cerebral palsy) can benefit from mobile applications (apps) such as Proloquo2Go, which converts icons and text into synthetic speech (Alper, 2017). By using AI to identify early warning signs, faculty and teaching staff can provide timely retention plans. Explainable AI (XAI) offers hope for algorithmic models that are easy to understand and interpret, rather than models surrounded by a sort of "enchanted determinism" (Campolo & Crawford, 2020). For example, decision trees are a type of AI model whose rules can be explained to a human in a way they can understand, so that they can reproduce the decision themselves, or critique, review and revise the rules (Khosravi et al., 2022); the illustrative sketch below shows what such human-readable rules look like.

Nevertheless, behind these fast-evolving technologies often stand powerful private entities whose primary goals are to consolidate power and make a profit: to secure a firm grasp not only of private and public infrastructures (Tucker, 2022), including education (Hillman, 2022), but also of users, who become dependent on digital products whose advancement, in turn, depends on such human use. There is little to no meaningful governance and oversight of how these advancing technologies influence and impact education and children's futures. Laws and policy globally seem to play a game of catch-up with advancing technologies, which underlines the power asymmetries between industry and state, and the typically knee-jerk reaction of governments to the risks and harms of innovation. A recent Human Rights Watch report found that, despite measures such as the GDPR, the Age-Appropriate Design Code and the UN Committee on the Rights of the Child's General Comment No. 25 on children's rights in relation to the digital environment, EdTech companies still exploited children's data and shared it with advertising technology companies and data brokers. This only goes to show that the laws and regulations put in place to protect children's privacy and rights remain insufficient and inadequate.

In this workshop we wish to 1) define the issues and risks relating to AI-integrated EdTech systems in education; 2) highlight the stakeholders and the different challenges each experiences and anticipates as education becomes ubiquitously digitalised in some parts of the world while others rush to close the digital divide; and 3) put forward actionable proposals for governance, regulation and monitoring of the EdTech sector, as well as proposals for data and algorithmic literacies for those directly affected by AI-integrated EdTech systems in education.
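To make the decision-tree example above concrete, here is a minimal sketch, assuming Python with scikit-learn, of how such a model's rules can be printed in a form a teacher could read, critique or even apply by hand. The feature names and the tiny "early warning" dataset are hypothetical, invented purely for illustration; they do not come from any real EdTech product.

```python
# Minimal sketch: an interpretable decision tree whose learned rules can be
# rendered as human-readable if/then statements (assumes scikit-learn).
# The dataset and feature names below are hypothetical illustrations only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [weekly_logins, assignments_submitted, avg_quiz_score]
X = [
    [1, 0, 40],
    [2, 1, 55],
    [5, 4, 70],
    [6, 5, 85],
    [0, 0, 30],
    [7, 6, 90],
]
# Hypothetical labels: 1 = "at risk", 0 = "on track"
y = [1, 1, 0, 0, 1, 0]

# A shallow tree keeps the rule set small enough for a human to review.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned rules as nested if/then conditions,
# which a human can inspect, reproduce, critique or revise.
print(export_text(
    tree,
    feature_names=["weekly_logins", "assignments_submitted", "avg_quiz_score"],
))
```

Unlike an opaque model, every prediction here can be traced back to an explicit threshold rule, which is precisely the property that makes such systems reviewable by educators, auditors and regulators.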

Expected Outcomes

Policy recommendations on independent auditing, regulation and monitoring of the AI and education technologies sector. Policy recommendations specifically on setting up curricula for critical reading readiness for children and youth. A working paper summarising research and policy proposals on governing AI and EdTech in education, which we will present during the session. The working paper will also include the policy recommendations and proposals from the audience, which we will gather using the asynchronous tool TESTIMONIAL.TO.

Hybrid Format: TWITTER: each presenter will include easy-to-quote statements in their presentation and encourage the audience to tweet them. This will give us the opportunity to follow up with our audience after the session and will allow us to disseminate and promote our discussions for wider impact.

LOOM: we will pre-record a summary of our presentations, with key arguments, questions and objectives, and make it available on LOOM with audio, closed captions (CC) and other accessibility features. We will make these recordings available to the audience before the session, giving them a head start and an opportunity to arrive with questions and their own thoughts and arguments.

TESTIMONIAL.TO: we will share this link at the end of the session so that both onsite and online audiences can contribute their feedback and thoughts if they were unable to do so during the session.

Online Participation


Usage of IGF Official Tool.