IGF 2023 – Day 4 – Networking Session #153 Generative AI and Synthetic Realities: Design and Governance – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> DIOGO CORTIZ: Hello? Good afternoon. We are going to start our networking session about generative AI. For this session, we have different speakers who will contribute to our discussion. We have two online participants, Heloisa Candello and Caio Machado, from Oxford University.

We also have Roberto Zambrana here with us. I will invite the online speakers to start our discussion. Heloisa, could you please start with your initial remarks?

>> HELOISA CANDELLO: Thank you, Diogo. I will share my screen and then we can start.

Thank you so much for the invitation. I'm Heloisa Candello, from IBM Research in Brazil, where I manage a group on human-centered and responsible technology. For the last eight years, I have been conducting research at the intersection of HCI and AI, particularly in conversational systems. These pictures illustrate the impact of financial initiatives with AI.

Okay. In the area of conversational systems, we had several projects to understand the perception of text-based machine outputs. This is a series of examples of conversational systems and the main challenges we have been studying for a long time. Now, with large language models, those challenges are amplified: they existed before, but with the new technologies we have to pay more attention and think deeply about their impact.

So, for example, the first one was in 2017, where we measured how text written by humans and by chatbots was perceived. We ran tests to understand the humanness of machines.

We worked with multiple agents making investment decisions. Then, with the same platform, we did an art exhibition where bots talked to each other and humans talked to the bots, in a cultural venue in Brazil. The idea was to reuse the same multi-bot platform we had before, with two characters from a famous Brazilian novel. We measured how audiences perceived the interaction with the chatbots on the table: people typed, and a projector displayed the answers, designed and drawn on the table.

We also looked at engagement: whether the chatbots asked people for their names and then addressed them directly. Another thing we observed was that, before looking at the pictures, people were actually talking to the paintings as well, asking, "Oh, what is this yellow color?", and the system would answer. So we can already think about how, now, we are going to have prompts as well.

And last year, we launched an exhibition in a science museum in Brazil where children can teach robots: they give examples of how humans talk, and similar examples of the same statement, so the robots can learn from them. We also have a kit for teachers to use in schools. The last one is a research study we did in collaboration with a big bank in Brazil, where we studied how people train chatbots in banks. The trainers were the best employees from the call centers, and they trained Watson, the chatbot there. It's a room full of people making sure that the bot will understand the clients.

A lot of articulation work happens there: the creators interact with each other to create the chatbot's answers.

So we see that this space is full of challenges that we, and many others in the HCI community, have researched.

For example, errors: how can we minimize and mitigate them? There is turn-taking, if you have more than one chatbot. There is the problem of interface humanization, and of how people can be deceived by bots.

There is also scope visibility. In earlier conversational user interfaces, if the chatbot did not know how to answer, it replied, "I don't understand, please can you repeat your question?" With the new technologies, that signal is gone, because the system always answers something.

There is malicious use as well, and the resolution of ambiguities, which is something those creators I just mentioned deal with every day.

And transparency, bias, and harm; we will talk about this.

So, with generative AI and the use of large language models, what changes? As I mentioned at the beginning, the scale is much higher: the ability to ingest and process huge amounts of data is enormous compared to the conversational systems we had before. The same model can be assigned to multiple tasks, which gives us automation, and also different contexts. For example, we had a client that worked with cars, and they had to build a different chatbot for each car. Now we can use the same model, set certain parameters, and just change the car.
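
To make that reuse concrete, here is a minimal, hypothetical sketch; the template, field names, and car facts are invented for illustration, not taken from the project Heloisa describes. It shows one generative model serving many products through parameters, where older platforms needed one hand-built chatbot per product.

```python
# Hypothetical sketch: one LLM-backed assistant reused across products
# by changing parameters, instead of building a new chatbot per car.
from dataclasses import dataclass

@dataclass
class ProductContext:
    name: str
    facts: list[str]  # curated, product-specific knowledge

TEMPLATE = (
    "You are a support assistant for {name}. "
    "Answer only from these facts:\n{facts}\n"
    "If the answer is not in the facts, say you don't know.\n"
    "Customer question: {question}"
)

def build_prompt(ctx: ProductContext, question: str) -> str:
    """Same model and template for every product; only the context changes."""
    return TEMPLATE.format(
        name=ctx.name, facts="\n".join(ctx.facts), question=question
    )

sedan = ProductContext("Sedan X", ["Trunk capacity: 510 L", "Fuel: hybrid"])
suv = ProductContext("SUV Y", ["Trunk capacity: 720 L", "Fuel: diesel"])
for car in (sedan, suv):
    print(build_prompt(car, "How big is the trunk?"))
    # In production, this prompt would go to a single shared LLM.
```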

And scale: it can do parallel communication, fluent responses, and stacked reasoning, and it can keep learning so that the models continue to improve.

I will focus more on conversational systems, since that is the main area I come from.

So now we can think about all of those challenges, plus the additional ones: hallucination, false content, and safeguards. That's why we are creating several platforms where we can control the models and fine-tune them.

There is misalignment of expectations: you have the human expectation on one side and what the model can actually deliver on the other, so the system may generate content that is not aligned with human expectations. We will talk in a little bit about vulnerable communities, so we can better understand which kinds of values we should also account for.

And lack of transparency: it is difficult to inspect these systems, because of the quantity of data that is there, and also because of how the algorithm was built.

So, for example, the exhibition I mentioned before: we had three bots, and people could interact with them. If people typed something the bots recognized, the characters of the book would respond. It was a closed scope, not an open one: just phrases and statements from the book.

Then one of the chatbots would say "more coffee," or something like that. But in the case of generative AI, we have hallucination, and the system always answers. It is more reactive than proactive, right? In some of our projects, we saw that conversational interfaces that are more proactive produce fewer errors, because the conversation is more scripted. Now that is not the reality anymore: the system reacts to prompts. If you design the prompt well, inserting information and asking the LLM-based system for what you want in more detail, maybe you increase the chance that it will answer with what you want.
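
As a small, hedged illustration of that last point (both prompts below are invented for this example), compare an underspecified request with one that supplies role, scope, audience, and format:

```python
# Illustrative only: the same request with and without detail.
# Added context narrows the space of plausible completions, which
# tends to raise the chance the model returns what was intended.
vague = "Tell me about the coffee scene."

detailed = """You are a guide at a literature exhibition in Brazil.
Answer in two sentences, in plain language, for first-time visitors.
Using only the novel's dinner-table scenes as context, explain why
one character keeps offering the guests more coffee."""

for prompt in (vague, detailed):
    print(prompt)
    print("---")
    # response = llm.generate(prompt)  # hypothetical model call
```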

Automation, we talked about that, and large datasets as well. And harmful language: in the case I showed you, which was a public space, the characters were all women, and a lot of unsuitable language was typed at the bots. Everything typed on the tablets was not shown on the tables, and the chatbots only answered with phrases from the book. We published a paper on this too. So harmful language is there, and now it can become even more evident.

Going on, I also mentioned this project from 2017. I bring it up to show that the issue is the same, except that now we have conversational systems that are more eloquent and can deceive people.

People looked at a conversational system with financial agents and had to say whether the financial advisor was a human or a machine, and why. We saw that when people received a text in a script typeface, like handwriting, they said, "Oh, it's a machine anyway, but this one wants to deceive me." Most people said the agents were machines, but felt this one was trying to deceive them. So what are the limits of being human? This is one thing we can think about. One of my favorite books is "The Most Human Human," by Brian Christian; actually, I will refer to him later.

He started by looking at people who pretended to be a machine, and at the qualities that humans should have to be human. So what are the qualities that describe a human? Maybe we should look at, and pay more attention to, that. Yes.

Okay. When we look at transparency, and at whether something is a human or not, we should also think about communities that have little access to education, to AI education, to technology education; it is not so close to them. For example, this is a community in Brazil: small businesses run by women. They have access to technology because they have mobile phones, which they often pay for in several installments, and their main channel is WhatsApp, for example.

So we did an experiment with them. We asked: what questions does an AI need to answer to be useful, effective, and trustworthy, and to respect human rights?

So what is this system? These women are part of a financial education course run by an NGO. When they enter the course, they answer a questionnaire; when they leave it, they answer another; and after three months and after six months, they answer again. We worked with the NGO, took those questionnaires, and redesigned the questions to fit a chatbot. While answering, the women responded to questions about women's empowerment, about business growth, and about revenue. But the main purpose of the system was to extract indicators to measure the social impact of the program.

We tested this with 70 women. As an output, they could see the health of their business on a scale.

But when we tested with them, several got a score of zero, for example. And why zero? One said: "This result means nothing to me. It does not mean I will stop engaging in my business. The index was zero because my business is not running. I'm not going to say it's dying; I'm going to say it's being born."

Another said, "I would like to know about my advertisement." I can talk a little bit about that. But first, about the zero: it matters because, for some of them, it was discouraging; they were frustrated to see zero. Digging in to understand, for one of them, the ex-husband paid the rent, and she counted that as an expense, when in the end she actually had a profit. Those things are so small, but they make a lot of difference, because of the context these women are in.

Other things the women wanted the chatbot to tell them: "I would like to know how my advertisement is doing, and whether I'm on the right path; what are the recommendations?" We asked about the future, and some liked that: "I want to continue answering this, because it makes me reflect." It means, for example, "Oh, this score, I can improve." Yes, but others said, "I don't have instructors." Maybe they are not at a stage where they feel comfortable with that yet.

So I think about AI, mistakes, and education. We talk about the terms "education" and "polite"; in Portuguese, the same word covers both. And religion is an interesting case. We asked the NGO, "Should we take this question out?" And they said religion is one of the main things the women disagree about: they share the same economic status, but people of different religions are put in the same WhatsApp group, and then usually there is friction there. Okay?

So how can we legitimate what the chatbot answers? Maybe in the future, and this is a provocation paper that we wrote, we could have a score for each kind of generative system. With this score, we could see how legitimate the system is, how transparent it is, and where its data came from.

Right?
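
One way to picture that provocation is a minimal scorecard like the sketch below. The dimensions, equal weighting, and numbers are illustrative assumptions for this example, not a published metric from the paper.

```python
# Hypothetical "legitimacy scorecard" for a generative system.
# Dimensions and equal weighting are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class GenAIScorecard:
    transparency: float     # 0-1: are model and training process documented?
    data_provenance: float  # 0-1: is it known where the data came from?
    scope_control: float    # 0-1: closed domain vs. fully open-ended
    evaluation: float       # 0-1: audited for bias and hallucination?

    def legitimacy(self) -> float:
        """Equal-weight average; a real scheme would weight per use case."""
        parts = (self.transparency, self.data_provenance,
                 self.scope_control, self.evaluation)
        return sum(parts) / len(parts)

bank_bot = GenAIScorecard(0.8, 0.9, 0.9, 0.6)
print(f"Legitimacy score: {bank_bot.legitimacy():.2f}")  # prints 0.80
```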

So we used closed scopes and closed domains to avoid hallucinations, or at least to mitigate them a little, because then at least the system stays contained for the client.
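
A minimal sketch of that closed-scope idea follows. The knowledge base, refusal text, and retrieval are deliberately naive inventions (keyword overlap instead of embeddings); the point is only that answers are restricted to curated content, and anything outside it gets a refusal instead of a guess.

```python
# Sketch of closed-domain answering to mitigate hallucination.
# KNOWLEDGE and the refusal text are invented for this example.
KNOWLEDGE = {
    "opening hours": "The exhibition is open 10:00-18:00, Tuesday-Sunday.",
    "tickets": "Entry is free; school groups must book ahead.",
}

def retrieve(question: str) -> str | None:
    """Naive keyword match; production systems would use embeddings."""
    q = question.lower()
    for key, passage in KNOWLEDGE.items():
        if any(word in q for word in key.split()):
            return passage
    return None

def answer(question: str) -> str:
    passage = retrieve(question)
    if passage is None:
        # Closed scope: refuse rather than let the model improvise.
        return "Sorry, I can only answer questions about the exhibition."
    # In production, this prompt would be sent to the model.
    return f"Answer using only this text: {passage}\nQ: {question}"

print(answer("What are the opening hours?"))  # grounded answer prompt
print(answer("Who will win the election?"))   # refusal
```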

The third thing I would like to mention, and I am almost finishing, is the expectation alignment I talked about. If we have generative systems, how can the values of people, the values we collected in the field, be aligned with the values of the other stakeholders, with AI there in the middle?

Here is an example from a call center. We expect productivity, fast performance, speed, efficiency, faithfulness; we need all of that. But then, when we look at the model, we need to choose one aligned with those values: a model that reduces hallucinations and whose data represents the public that is going to use it, right?

So I am going to end with this: we have some design principles for thinking about how we can build generative AI systems in a responsible way. Thank you so much.

>> DIOGO CORTIZ: Thank you, Heloisa, for sharing your wonderful work with us. So we had a view from the industry. Now I invite Roberto to bring his perspective. Please.

I think you should use the mic, for the online people. Thank you.

>> ROBERTO ZAMBRANA: Thank you very much, Diogo. It's a pleasure for me to be with this distinguished panel and... sorry?

It's okay, right? You can hear me?

Okay. I think it will be nice: I totally agree with Heloisa's intervention, and I would like to shift my comments a little toward how we arrived at generative AI specifically, since we have had artificial intelligence for many years now in different forms: translators, image recognition software, and various other applications of AI.

But I think one game changer, indeed, was ChatGPT, and not because there aren't other tools; there are many. But this one was, I would say, the first presented this way, I think in November last year, and in a matter of weeks many people started to use it. There was a thrill of using this tool, then word spread, and within weeks it went from thousands of users to hundreds of millions of users. So this was indeed a particular phenomenon to analyze; I don't remember any other tool penetrating society so rapidly. And I would say there is a factor behind the adoption of this tool: it's not just that many people already used different bots; in this case, people experimented with it and then started to use it for many, many other activities. I mean formal activities, and now in the academic world we can talk about cheating, or about presenting work that was not actually developed by the students, the learners, et cetera.

But I would say that many people felt this tool was really without limits. Again, it can be applied in different ways. Now, combined with other forms of AI, there are people who are even making money with it; they found it a way of making money.

I have been a teacher for the last 20 years, more or less, at the university, mostly in IT-related subjects. As happened in other areas, when the teachers and the students learned about this tool, they were excited. The people who first encountered it wanted to formally tell the others about it, and started to organize webinars, seminars, and things like that, in a way trying to present themselves as experts in this field. Many people were doing things like that.

Just because they used it and discovered this fantastic tool, they wanted everyone to know it and use it. I think that's another important part we need to reflect on.

The other comment I wanted to make is that, yes, AI has been with us for several years now, but the ethical aspects, the regulatory framework, have been discussed for maybe only the last five years. I can witness that because I was a member of the MAG for the last three years; last year was my last as a MAG member, and I had the chance to see how the discussion on the regulation of AI was evolving as well.

Then it reached the academic sector, with all these possibilities and maybe even the negative impacts this may cause. I think we are at that moment now, back in Bolivia and perhaps in the region, or even in the world, with different sides of the coin. There are people who feel this is like the devil and we should try to avoid it, maybe prohibit the use of these tools, because they teach bad things to our learners, because the learners try to pass for persons they are not, et cetera; you understand my point. And then, of course, there is the other side that would actually like to see this evolve further. When we talk about regulation, and about adjusting the policies that apply even in the academic sector, I don't think prohibition should be the way.

And I always like to give this example. I know we should respect the differences between scenarios, but remember back in the '60s and '70s, maybe no one here will remember that moment, when we were using the slide rule. One of the things we required from students was knowing how to manage that kind of tool, right?

But then pocket calculators appeared. So immediately, of course, it was important to adjust the curriculum, and what learners needed to learn started to evolve.

I think that is the kind of reflection we need to do at the universities: not prohibiting the use of these kinds of tools, but adjusting the skills, the newer skills, we want our students to have in the near future, knowing that we now have tools like this one that are, of course, going to save a lot of time across many, many activities, for our teachers and for the academic community as a whole.

So I will stop there.

>> DIOGO CORTIZ: Thank you. So we had views from industry and from the technical community, and now I invite Caio Machado to give his view from civil society, but also from academia. Welcome, Caio; the floor is yours.

>> CAIO: Thank you. This is an opportunity to network with the folks over in Japan, so if anyone wants to reach out, I would be glad to continue our conversations later on.

So I hope you guys are seeing the slide okay?

>> DIOGO CORTIZ: Yes, yes, it's great.

>> CAIO: So, my concern, when we talk about generative AI and the title of our talk, synthetic realities: let's lay down a premise here. I think of issues related to artificial intelligence in three major layers. First, the data: the quality of the data, the diversity of the dataset, whatever was used to train and develop the models. Second, the engineering of the models themselves. And the final layer, which is deployment: when we take a tool, throw it into society, and it behaves in ways that are unexpected. A great case of that, and it's a cliche case (it's an algorithm, not even AI, from what I understand), is the COMPAS case, where algorithmic tools were used in certain states in the United States.

On the one hand, it's biased, so we have an issue in terms of the data and the development of that tool. But also, judges started using something that was intended to attribute risk to defendants to determine the severity of sentences. So what was intended for one purpose, once it was thrown out into the world, was incorporated and embedded into society in different ways.

That is harder for us to foresee, and I think it is an issue much greater than we are discussing. I do agree hallucination error is a severe problem, but we are not thinking as much about what happens once the AI is out in the world.

For example, I know that lawyers and judges around the world are using generative AI. What is the impact when a judge decides to pay $20 a month to use ChatGPT, and all of a sudden ChatGPT is deciding cases and setting precedents?

So I think that is a big concern. The second thing, addressing synthetic realities: I am less worried about the fabrication of extremely realistic content, deepfakes and so on, which I acknowledge, but which I think will be addressed in the midterm with new mechanisms for developing trust. What really concerns me is these tools becoming our means of access, the same way we use Google to access results. Depending on the words you use, you get different results for where dinosaurs come from; it could be an evolutionist theory or not. When you have a chat doing that, and everything is compacted into a single answer, what sort of tools do we have to double-check it, and to equip users to fact-check and get different perspectives? In the sea of information we have, the eyedropper is getting smaller, more complex, and less transparent.

I think that plays a big role in creating distortions in our reading of reality. So, speaking of disinformation or even malinformation, I think these tools, and the lack of accountability around how they operate, can have severe effects in that regard.

I'm trying to be quick so we can all speak. That refers back to the previous speakers, and to accountability. I think there is little debate on how we can ensure accountability at the development level, and on ways of keeping people from using AI tools for inappropriate purposes.

I'm throwing this issue to the engineers; as a lawyer, I can do that: you think of the solution. This was something I was discussing with the school of engineering: how can we get users to think about how the AI is being deployed? That speaks to AI literacy, and tech literacy in general. Finally, just to point to some of the work we are doing right now: academically, I'm at Oxford, but I'm also a fellow at a school of engineering here, learning a lot from the engineers. We are thinking about the uncertainty across different machine learning models, where you might have 95% accuracy across different models, but then you have that 5% where you get predictive multiplicity: equally accurate models disagree on the same individuals. What do you do with those people, and who has the legitimacy to decide what should be done?
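
A hedged sketch of that phenomenon, on synthetic data: the models and numbers below are invented for illustration, but they show how several classifiers with near-identical accuracy can still give conflicting predictions for particular individuals.

```python
# Predictive multiplicity sketch: similarly accurate models that
# disagree on individual cases. Data and models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    DecisionTreeClassifier(max_depth=8, random_state=0),
]
predictions = []
for model in models:
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))
    print(type(model).__name__, "accuracy:", model.score(X_test, y_test))

# Individuals on whom otherwise comparable models conflict:
predictions = np.array(predictions)
ambiguous = predictions.min(axis=0) != predictions.max(axis=0)
print(f"{ambiguous.mean():.1%} of test cases receive conflicting predictions")
```

Who resolves those conflicting cases is exactly the legitimacy question raised above; the code only surfaces them.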

You can look at the work of several professors who are going deep into this topic, and we are working together.

For me, the fundamental question is this: there is a whole section of the population, of the users, of the data, that the algorithmic tools don't know what to do with. Who should be able to decide? So far, obviously, this is being answered by the teams developing those models. But once the models are deployed in society, the effects are not restricted to code; they are social and ethical effects, which perhaps should be discussed in other spaces as well.

With that, I will conclude my speech. Thank you once again for having me, and please feel free to reach out so we can continue the conversations.

>> DIOGO CORTIZ: Thank you, Caio. We have five more minutes, so I invite Mateus to give his contribution to the session.

>> Amazing. Thank you so much, Diogo. Hello, everyone. My name is Mateus Petroney, and I am at the University of São Paulo, in the field of computer design and artificial intelligence, working with user experience design.

Just a few things here, to bring a more user-centric perspective and not to repeat the other remarks, with which I am aligned, okay?

On one hand, there is a plane of expectations concerning the benefits of these advancements. Even with content generated by AI being considered synthetic realities, it has the potential to overcome long-standing challenges within the usability domain: the enhancement of engagement through personalized experiences, and more accessible ways to obtain knowledge. This potential value extends to diverse domains such as education, health care, well-being support services, digital communications, and even customer support.

The human-like AI techniques showcased in specific chatbots are a prime illustration of this. Meta's recent chatbots are a case in point.

They aim to reach users across available devices, within the realms of the celebrities' expertise. In doing so, they significantly broaden the scope of engagement.

On the other side, despite the promise these innovations hold, numerous concerns deserve our attention. In a world where digital content could be created entirely by AI in the next few years, facing these concerns becomes imperative. This underscores not only a governance and technical challenge but also a design one, as we need to allow users, on a small mobile screen, through color, typography, or other elements, to get informed and make better decisions regarding its utilization.

We must remain vigilant regarding the potential dangers of establishing affective bonds with such technologies.

Users may inadvertently develop strong relationships with chatbots, which could be harmful if they become overly reliant on them.

In the realm of education and mental health support, such attachments could compromise social and learning skills, and the significance of sharing experiences with peers, families, and the surrounding community.

Beyond that, if we speculate a little more about possible futures, we could consider the possibility of users simulating their own presence through chatbots on social media platforms. We need a critical view of what is inherently human, such as having a unique personality, and of how we look at the capabilities of these emerging technologies, and at what the current state of AI may not, or should not, deliver.

I believe there is huge room for improvement in generative AI: sometimes to indicate its nature to the user, and sometimes to prevent content that threatens human rights and propagates misinformation. The same approach of artists being used for personalized chatbots could be applied to artists appearing in deepfake videos with harmful content, which is increasingly prevalent. I invite you to reconsider the significance of presenting such indications alongside the user interface. These emerging challenges require collective efforts from government, society, and research to safeguard the democratic balance. That's it for me; thank you so much.

>> DIOGO CORTIZ: Thank you. So we had inputs from different stakeholder groups, and now we have time for just one question, if someone wants to ask one.

Yes, please, you can go to the mic.

>> AUDIENCE MEMBER: The mic is on. Thank you very much. My name is Valerie, and I am a master's degree student. My question, and maybe one point I would like to speak about, is how generative AI can be used for crime, and what that means for cybersecurity. As we all know, we can now generate images, we can chat with LLMs, and we can mimic voices. What is stopping bad actors, the people who really want to do harm, from using those tools to, for example, generate somebody's grandmother's voice, or generate my voice and call my parents requesting money? My question is mostly related to that.

I'm just thinking this is a point that needs further discussion, and maybe regulation. How are we going to deal with this possible crime? In my eyes, it is going to grow extremely fast over the next couple of years, as the algorithms become much more efficient and the output becomes barely recognizable by human beings. Thank you.

>> DIOGO CORTIZ: Thank you. Roberto, do you want to start answering?

>> ROBERTO ZAMBRANA: Sure. I will go back to my previous point. It is really hard to think that regulation alone is going to solve everything; perhaps we need to come up with creative ways of dealing with those kinds of examples. Everything needs to change now; we need to adjust to this new reality. I can talk about the academic area; I'm not an expert in crime, of course.

But I will say, just to take an example: it will now be hard to consider that an image or a voice recording is concrete evidence of a crime, given these new possibilities, and yet that assumption is fixed in our current laws. So that is the kind of thing that needs to change based on this reflection, and I would say the same will apply across all different areas. Thank you.

>> DIOGO CORTIZ: Okay.  So Caio, please.  The floor is yours.

>> CAIO: Yeah, just to quickly complement: that is already a reality in the US, and for sure in Brazil, where the use of deepfake voices to run scams over WhatsApp is very, very common. That is something we need to deal with.

I think we can look back at the knife. We have had knives around for thousands of years; we created laws, and still that hasn't prevented people from stabbing each other. Meaning: the tool is around.

It will be used for good and for bad. I think policy, not only criminal law but regulation, market regulation, all sorts of rules we can think of, needs to address limiting the circulation of these tools in whatever contexts they are used for criminal purposes, and to increase traceability. We should also promote public policy for digital literacy (sorry, it's late here) and get people to mistrust audio and to have other means of checking.

So it is more of an ecosystem solution than passing one rule that will outlaw the misuse of deepfakes, in voice or anything else.

It's a series of initiatives and rules that we need to promote.

>> DIOGO CORTIZ: Thank you, Caio.

So our time is over. I would like to thank all the speakers and the audience, and this session is closed. Thank you.