Creating a Global Standard for AI Ethics

E23 | With UNESCO's Clare Stark
Updated Nov 16, 2023

On this episode of the AI For All Podcast, Clare Stark, UN Coordination Officer at UNESCO, joins Ryan Chacon to discuss the prospect of a global standard for AI ethics. They talk about why AI ethics matters, who is responsible for ethical AI, whether government can keep up with AI innovation, ethical issues specific to generative AI, deepfakes and misinformation, whether governments can be trusted to regulate AI fairly, open source and emerging technology, and the future of AI.
About Clare Stark
Clare Stark is a UN Coordination Officer in the Priority Africa and External Relations Sector at UNESCO, where she is responsible for enhancing UN system-wide coordination on emerging technologies, including artificial intelligence, biotechnology, and neurotechnology, and for developing strategic partnerships with UN entities and the private and public sectors. She co-chairs the High-Level Committee on Programmes Inter-Agency Group on AI, bringing together 40 UN entities to strengthen the ethical development and deployment of AI in support of the 2030 Sustainable Development Goals.
Interested in connecting with Clare? Reach out on LinkedIn!
About UNESCO
UNESCO is the United Nations Educational, Scientific and Cultural Organization. It contributes to peace and security by promoting international cooperation in education, sciences, culture, communication, and information.
Transcript:
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon. Our co-host Neil Sahota is not with us today. He may jump in later, but with us as always is our producer, Nikolai.
- [Nikolai] Hello.
- [Ryan] Today's topics are very exciting. We've been planning this episode for quite a while, and we're excited to have this discussion. We're going to be talking about ethics in AI, deepfakes, bias, and the global governance of AI. And to discuss this, we have Clare Stark. Clare Stark is a UN Coordination Officer in the Priority Africa and External Relations Sector at UNESCO, the United Nations Educational, Scientific, and Cultural Organization.
Clare, thanks for being here this week.
- [Clare] Thank you, Ryan. Thanks for inviting me.
- [Ryan] Tell us a little bit more about what you do for the UN, the part of the organization you're with, and all the good stuff there.
- [Clare] Sure. So I'm the UN Coordinator focusing on emerging technologies. As you may have heard, UNESCO developed the first global standard on the ethics of AI, the Recommendation on the Ethics of Artificial Intelligence, which was adopted in November 2021 by 193 member states. It covers a number of principles around data privacy, the importance of fairness and non-discrimination, issues around ensuring that AI is sustainable, and the importance of having human oversight and determination in AI.
And it looks at AI through the whole life cycle. It was developed with a number of experts from our bioethics committee, and we are now working on implementing the recommendation with 50 countries around the world. We've developed two tools to support the implementation, because we want to translate the recommendation into national legislation and policy. The first is a diagnostic tool, an AI readiness assessment methodology, which looks at where countries are in terms of AI. What capacities do they have? This covers infrastructure, and whether they already have AI strategies in place. We found, for example, that only 67 countries currently have any kind of national AI legislation or policy. That was one of the reviews we undertook, and we're supporting countries in developing that assessment. The second is an ethical impact assessment methodology, which is basically for AI procurement. It looks at issues like why you're developing the AI and whether the provider you're working with has responded to certain questions in the ethical impact assessment. You may have seen the European Union's AI Act, which also requires high-risk AI technologies to undergo an assessment. So these risk assessments are becoming more pervasive.
- [Ryan] Fantastic. I'd actually like you to talk a little bit more about that recommendation. You gave us a high-level view of the areas it covers, but for somebody coming into this space, maybe following along with all the conversations around AI policy and regulation, what are some of the specifics worth mentioning about this recommendation that everyday people should understand and take away?
- [Clare] Well, there are a number of ethical principles out there. The point of the recommendation is really to have comprehensive ethical principles that could be adopted around the world. This is why we have these 193 countries that have signed up. What's interesting about the recommendation is that it has a monitoring mechanism within it, which is what we're starting to use right now. We're looking at countries' baselines, and we're looking at issues like, does the country have data privacy protections, for example? A number of countries are developing their own data protection frameworks. You've heard about the GDPR, of course. So countries are starting to realize that they need data protection laws in place, and one of the first things we look at is whether such laws exist. We've seen that countries are coming from very different baselines.
Of course, countries in the West tend to be much more advanced, but we're doing a lot of this work in Latin America and Africa, where we're seeing that the baseline is quite different. Sometimes there are even difficulties in answering some of the questions. We're also trying to identify whether countries are developing actual AI strategies. Is this part of their digital strategies? And which ministry is actually responsible for developing the AI strategy? Because we see the pervasiveness of AI across sectors, identifying where this sits within the country and then trying to take a cross-sectoral approach is what we're looking at right now.
And as I mentioned, it goes through a number of principles. We're looking at issues around gender equality. As we've seen over and over again, including with generative AI, a recent study of Stable Diffusion, I think it was, showed how generative AI replicates a lot of gender stereotypes.
I think it depicted only 3% of judges as women, when in reality I think it's something like 35%. I'm just trying to recall these figures off the cuff. But we can see how this replicates gender and racial bias through these images. So this is something the recommendation really aims to address.
- [Ryan] For people who may not understand all the reasons this is so valuable, if someone asked you why ethics in artificial intelligence matters, and why we need standards and regulation around it, how would you answer that question?
- [Clare] I think ethics is really important because it affects our lives. We've seen a number of cases where artificial intelligence is used for hiring. We're using AI to help decide who may be guilty, who may be a perpetrator. And if you're misidentified, this can have serious repercussions for your life and your livelihood. But we're also seeing issues around jobs. I think Goldman Sachs said something like 300 million jobs may be at stake. So how is this going to affect our jobs? How does this affect how we learn about artificial intelligence?
And as jobs become automated, it's low-skilled workers who are going to be out of jobs, and some already are. So what are our reskilling and upskilling programs for these workers? I think we should also take into account that almost 3 billion people around the world don't have access to the internet. Their data isn't part of the ecosystem. So it's a very Western-centered approach and worldview, and I think we need a diversity of worldviews. That's why we talk about the importance of ethics in artificial intelligence.
- [Ryan] How have the conversations with Western organizations and individuals compared to others around the world? I know you mentioned you're working in Latin America and Africa. How have the opinions and views on ethics differed in the conversations you've had exposure to?
- [Clare] We've only recently begun the conversations in Africa and Latin America. We're working with the Development Bank of Latin America, CAF, which is supporting our work, particularly with Costa Rica in developing their national AI strategy. We're still in the initial stages, so it's hard to say what the differences in approaches are. But you can already see differences in terms of languages, for example. If you think about the indigenous languages in Latin America, are those people adequately reflected in the data? How are they benefiting or not benefiting?
So I think these are some of the issues that we're still trying to assess and grapple with. We'll be holding a global AI ethics conference in Slovenia in 2024, where we will present all of the information from the countries where we're currently working to implement the AI readiness assessment methodology.
- [Ryan] When companies are out there looking to adopt AI solutions and tools for different things, whether it's hiring, performance reviews, or other general jobs and tasks, who is responsible for ensuring they're held to ethical standards? Is it the organization adopting the tool? Is it the organization creating the AI technology? Where does the responsibility fall for ensuring that these tools, and the practices built around them, follow the ethical guidelines and standards being set?
- [Clare] There's been a lot of discussion about private companies and the voluntary application of ethical AI principles, and of course there are different approaches around the world, but ultimately governments are responsible for their citizens and for ensuring that private companies adhere. There's been this call, as you may have seen, from the Secretary-General about having some sort of global AI body, a UN watchdog of sorts. Different ideas are being discussed about what this would look like. Would it be similar to the IAEA, which regulates nuclear energy? Could we create a similar global body to regulate AI? This is still being discussed, and there are different proposals on the table. It's one of the issues UNESCO is looking at, and we're supporting the work of the Secretary-General's Tech Envoy on that. And there are a number of private companies working on this. We had some discussions with Mercedes-Benz, which is looking at applying ethical AI in its practices and has developed a whole process for doing that.
Still, if you leave it to private companies, they each have different ideas about how to do it. So we really think it's the government's responsibility to ensure that AI is adequately regulated.
- [Ryan] How do you feel about governments' ability to keep up with tech innovation? AI is obviously a big area that's moving very quickly, and in the past, government has often lagged behind the speed of innovation in a newly introduced technology. How has that been handled in AI, especially over the last twelve months or so, with the explosion in generative AI and so many other tools and topics out there? How have you seen governments keep up with the changes, or how do you foresee that being handled as this industry continues to rapidly evolve?
- [Clare] Yeah, I think this is a really interesting point, because we've seen it raised a number of times that governments aren't able to keep up with the rapid pace of technology. Now we're seeing, for example, neurotechnology going to market, and it's not well regulated, right? Especially as neurotech products go to market, this could potentially involve your brain data. These are some of the issues we're grappling with, and they point to the need for really anticipatory governance. What does that look like? It means changing the current modus operandi of traditional government models. And I think, as we've seen with Sam Altman and others, there needs to be more connection, with technologists explaining to governments what the technology is about, and more learning on that front. There has been some uptake among governments, and we've actually developed a policy tool that looks at AI competencies for public servants, but this needs to become more generalized around the world so people can become more AI literate and understand some of the issues.
But as we've seen with the EU AI Act, generative AI hadn't originally been included, so they had to revise it to include generative AI. It has to be adaptive. Any kind of governance model has to be adaptive.
- [Ryan] 100%. Yeah. Generative AI is a really interesting topic. We've talked a lot on past episodes about the different elements people need to be concerned about. Are there any ethical issues specific to generative AI that you've been focused on, that you've seen come up, or that you feel really need to be addressed or thought through?
- [Clare] Yeah, there are a couple of issues. Of course, we've seen issues around hallucinations. I think they've recently added a disclaimer that you should be careful about the information you get. Yann LeCun recently gave a talk about what generative AI should and should not be used for, and even said that children under 13 should not be using generative AI without supervision. But one of the issues we're looking at at UNESCO is the use of generative AI in education. Of course, this was a big issue. Are people using it to cheat on exams, to write their exams? And then other people have been writing applications to detect whether you used ChatGPT to write your exams.
I think this technology is here to stay, so we need to identify how we can use it for higher-level learning, and in what contexts we can use generative AI to help us develop curricula, but curricula based on material that has been appropriately reviewed.
- [Ryan] It's really interesting, over the last eight months or so, to watch the growth in the conversations around generative AI, and all the things you mentioned: the hallucinations, the bias, all the different elements that influence the outcome of interacting with these tools. People are using them for their jobs, for educational purposes, for so many different reasons, which on the surface I think is a great thing. It's a very versatile tool, and people are learning this technology and trying to understand it.
But there are a lot of pieces I don't think everyone is completely aware of, especially the hallucinations and the bias a lot of these tools can have. I'd be curious to hear your thoughts on how to address that. It seems like a challenge to completely eliminate. What have the discussions been around how that is evaluated?
- [Clare] Yeah, I think this is an interesting question, because we recently did a study where we reviewed about 450 schools and universities on whether they had any generative AI policy, and we found that less than 10 percent had any kind of generative AI policy. So one of the things we're proposing, for example with our ethical impact assessment, is that you need to have a review ex ante and ex post, right? Before you launch any kind of large language model, and then research into the repercussions during the life cycle of the model and after. We're launching some of these things without fully understanding their impacts. I think we also need more transparency: we need to understand how the models make their decisions, and this needs to be more readily available. Some of it is on open source platforms, but this needs to go further, especially because, as we head into the 2024 elections, we will have difficulty differentiating between what is real and what is fake.
- [Ryan] That brings up a great point about deepfakes: understanding what they are, their implications for democracy, and how they're going to influence things, because they're becoming incredibly sophisticated. With so much going on in the world, and a huge election coming up in the US next year, they're going to play a pretty big role, just like social media has over the years, especially in the last election, where a lot was learned after the fact. When it comes to deepfakes, what can be done, if anything? How can you regulate them, or set up guidelines to help eliminate or avoid the harm? After what was learned last election about social media's influence, what companies were doing with access to data, and fact-checking, there's been a lot of work to regulate that and make sure it doesn't get out of hand. Deepfakes are a whole other level because of what's possible. From your perspective, how do you stop that? This can have a huge impact on the world, on democracy, and so forth.
- [Clare] As we've seen too, there was this push for content moderation, but now a lot of people have been laid off at some of the social media platforms, so you don't have the content moderators you had before, and you're seeing those who have promulgated anti-vax messaging being allowed back on some of the platforms, for example.
So I think that's why, again, the government really has to step in. UNESCO recently held the Internet for Trust conference. And this is also a big priority of the Secretary-General, who is looking at a code of conduct to ensure the integrity of internet platforms, because this very much goes to the whole issue of ensuring access to information, especially around elections. And we're seeing that there can be an increasing amount of violence. There are obviously questions about watermarking, the need to watermark all of these images to identify what has been generated by AI. But then there are the fact-checkers, having content moderation, and ensuring that these social media platforms are held accountable. So does that mean having some kind of global regulatory body to monitor these internet platforms?
- [Ryan] Sure. Anytime you bring in fact-checkers and moderators, you're dealing with humans, and potentially with levels of bias that often aren't even realized in how they do their job. So there's a challenge there. And thinking globally is a little different, but in the US, if the government were in charge of regulating these platforms, deciding who can post what and who has access to these channels, I feel like it would be very difficult to convince the public that the government has our best interests at heart, without any bias or agenda, because they know this is a tool that can be used for their benefit. If they're the ones dictating how it's used, I feel like there would be a lot of criticism and skepticism about whether it's being done fairly and in line with free speech protections. So I'm very interested to see how this evolves. I just don't have that much faith in the government not finding a way to use this for its benefit, whichever group is in power at any given time.
- [Clare] Yeah, that's really a risk. We talk about the importance of media literacy and being able to critically analyze the news that you're seeing, but that's becoming more and more difficult. I think this is an issue a lot of people are grappling with right now, and I don't think there are any easy answers.
- [Ryan] The more technology is out there, the more people are going to figure out ways to use it for positive and negative ends, and I think it's always going to be tough to keep fighting that battle of how we stop the bad stuff from happening.
- [Clare] I think that's true. This dual use of technology, I think we've been grappling with that for decades.
- [Ryan] I think we always will, because it's just human nature.
- [Clare] Yeah, that's why I think this discussion about whether everything should be open source is also interesting, because of course, when it's open source, people can use it for whatever purpose they wish.
And there's the pervasiveness of AI, which is different. When we were talking about what a possible AI body could look like, with nuclear energy, for example, not everybody has the capacity to create nuclear energy, whereas with AI, it's much easier for people to use it and manipulate it.
So it has a different character. But then, as we're seeing advances in areas like quantum computing, and I was reading about these xenobots, which are new lifeforms that you're able to manipulate, robots that are alive and don't currently exist in nature, you can see the scale that some of these converging technologies can reach and the importance of grappling with some of these issues very quickly.
- [Ryan] Yeah, it's like the conversation we had around general AI, what that could potentially become, and where things are headed. There's a lot of excitement around the potential, but there's a whole other side I think we need to pay equal attention to: the drawbacks and downsides for humanity if we just unleash it. Where do you see AI within the larger emerging technologies ecosystem? How do you see it continuing to influence things and grow, looking forward from here? What are your thoughts on that?
- [Clare] Yeah, I think AI has allowed us some incredible leaps in research and development, with protein folding and developing new proteins that don't exist in nature, or helping to contain nuclear fusion, which could have huge implications for renewable energy and for dealing with the climate crisis. It has a role to play in helping us achieve the Sustainable Development Goals. We're also seeing AI used in neurotechnology, which takes things to another scale. And as we get more computing power, AI will become more powerful. So there's also the existential question. These are questions still being discussed, but as we see this merger between biotechnology and artificial intelligence, we could perhaps see that sort of possibility in the future.
- [Nikolai] How do you think about the fact that ethics obviously vary across the world, and there are definitely contradictions? If you're trying to create an ethical standard for AI, how do you navigate that? How do you think about that?
- [Clare] When we're doing our readiness assessment methodology, for example, we work with national experts in the country. We have this network of experts without borders that we utilize. But we also work with experts in the countries themselves, because it's important to recognize that there may be different cultural contexts and approaches. Of course, our ethical stance is very much based on human rights. We use the Universal Declaration of Human Rights, and that is our guiding compass. So when we're talking about gender equality or ensuring fairness in AI systems, that's our baseline, if you will. Everything is basically derived from the Universal Declaration of Human Rights, while taking into consideration that we need to apply it in different contexts in different countries.
- [Ryan] What do you focus on most? From your perspective, over the next six to eight months, what should people really be paying attention to in the AI space? Are there any areas or topics that deserve more attention from the general public as we get through this year and into next? What are you most looking forward to being involved in or seeing happen?
- [Clare] Personally, I'm very much engaged in looking at different possible governance models, and at how governance can be more anticipatory so that it can catch up with technology. We will be developing a model governance framework that will be unveiled at the conference in Slovenia in 2024. I think that could be a really important model that countries around the world could use and adapt. So that's our goal.
- [Ryan] A very fascinating conversation, Clare. Thank you so much for being on. If our audience wants to learn more about the organization and the areas you're working in, what's the best way to do that?
- [Clare] Sure. They could contact me or our organization at unesco.org. We're working on quite a range of artificial intelligence topics, from generative AI as used in education systems to, as I mentioned, the ethical implications of AI. We're working with judges on using AI in judicial systems. So yeah, they could contact me, or, I don't know if I should give my
- [Ryan] We'll have your LinkedIn and things like that, so we'll let people reach out that way. Well, Clare, thank you so much for taking the time. I know it's your evening over there, and I really appreciate you coming on. This is a great topic, and a lot of education is needed around it. You've done a fantastic job talking through these topics, so I really appreciate it, and I'm excited to get this out to our audience.
- [Clare] Thank you. Thank you for having me.
Special Guest
Clare Stark - UN Coordination Officer, UNESCO
Hosted By
AI For All