Using AI For Good: The Ethical and Responsible Usage of Generative AI
About This Episode
Bill Gates once said that artificial intelligence is the most significant technological advancement since modern computers. And with the direction AI is going in, especially generative AI, he’s probably right. Now, there are a lot of people around the world asking questions about generative AI and how to use it ethically and responsibly, especially within their own businesses. So, rather than fearing AI and all of its capabilities, we need to understand it. More specifically, we need to understand how to use its power for growth and more importantly, how we can use AI for good.
We dive deeper into this topic on this episode of The Edge with artificial intelligence ethicist and edtech entrepreneur, Ben Roome. With his team, Ben helps companies from Facebook to Skillsoft develop their ethics capacity by implementing systems and processes that enable reliable ethical decision making across an organization. It’s important that we don’t just trust the technology, but also the people behind it. Join in as Ben and host Michelle Boockoff-Bajdek discuss how to use AI responsibly with fairness, accountability, and transparency.
Read Transcript
[00:00:07] Michelle BB: The views expressed by our guests are their own and do not necessarily reflect the views of Skillsoft. Welcome to The Edge, the Skillsoft podcast, where we share stories of the ways in which transformative learning can help organizations and their people grow together. I'm your host, Michelle BB. My pronouns are she, her, and hers. Over the last few years, and with dozens of guests, The Edge has provided valuable insights into why skill building is essential to creating a sustainable workforce. And a lot of our discussions have centered on digital transformation. But still, this last year, with the mainstream emergence of generative AI, we are seeing something altogether different. In fact, we're at a watershed moment for technology right now. Artificial intelligence, and specifically gen AI, have leapt ahead, giving us capabilities that, frankly, seem like the stuff of science fiction. This is a sea change along the lines of the Internet and mobile phones. And if it hasn't already affected what you do every day, just wait. Because it will. And this is just the beginning. We haven't even scratched the surface when it comes to the sheer power and promise of generative AI. And you know, there is a saying, right? With great power comes great responsibility. Well, that's what we're going to talk about today: responsibility and risk. And let me tell you, I think it's something we all need to hear. When ChatGPT was unveiled, most of us were astounded. Even Bill Gates, who has been somewhat skeptical about the future of generative AI, has said AI like ChatGPT is the most significant technological advance since modern computers. And personally, having worked at IBM as CMO of Watson, I'm fairly familiar with artificial intelligence. But when I first tried ChatGPT, I had two thoughts. The first was wow. And the second was whoa.
We've all heard the various predictions. AI will take away jobs. Humans will become obsolete. There will be no more truth to be found because we will no longer be able to trust what we read, what we see, what we hear. But when it comes to gen AI, what we need now is not fear mongering. We need understanding about how to use its power for growth by skilling and upskilling our workforce. We need to understand what new jobs can be created, industries that can be revolutionized. How we use this for good. The need for regulation is urgent. In fact, according to the White House, President Biden has just issued a landmark executive order that establishes new standards for AI safety and security. This is landmark indeed, but external regulations are not enough. Organizations must earn the trust of their stakeholders by using AI responsibly and ethically. And that means we must create, communicate, and enforce clear, actionable AI policies based on our values and our principles and the guidelines we uphold, so that our employees understand our expectations.
But how do we start? Well, that's where today's guest comes in. Ben Roome is an artificial intelligence ethicist, an edtech entrepreneur, and a co-founder of Ethical Resolve. He and his team help companies from Facebook to Skillsoft develop their ethics capacity by implementing systems and processes that enable reliable ethical decision making across the organization. And as an edtech entrepreneur, Ben runs the digital credentialing platform badgeless.com. Ben, thank you so much for being here. Welcome to The Edge.
[00:03:49] Ben Roome: So glad to be here, Michelle.
[00:03:51] Michelle BB: When I tell you, we are so thrilled to have you. This topic is so incredibly hot. And I know that today's episode in particular is going to really resonate with a lot of people who are asking questions about generative AI and its ethical and responsible use. And personally, I've been doing a lot of reading around AI and ethics, and I happened across an article by David Deutsch, professor at Oxford, written back, like, in 2012, long before, at least, I knew gen AI was a reality. And he said, philosophy will be the key that unlocks artificial intelligence. Now, I know you're a doctor of philosophy yourself. Can you tell us a little bit about your journey from teaching philosophy to data analyst, to learning officer, and now to AI ethicist? I have a feeling that they're all connected.
[00:04:41] Ben Roome: Yeah, they certainly are connected in a lot of different ways. And that quotation from David Deutsch is a great one. And my sense is that what he was getting at there was that it was going to take philosophers to provide an understanding of consciousness that would allow us to develop artificial general intelligence. And so back in 2012, a lot of times when people referred to AI, they were referring to what we now refer to as AGI. And of course, in the last ten years, so many different specific forms of artificial intelligence have emerged. And of course, Gen AI is just one of those.
So in answer to your question about how they're all connected, I actually, as an undergrad, was very interested in philosophy of mind and the study of consciousness.
But as I went through grad school and wrote my dissertation, my main interest ended up being in the philosophy of science, and in particular, the philosophy of measurement practices and the way that our scientific measurement practices produce our reality. So from there then, I got very interested in data and data science, and I started working in the field of AI ethics before it was even called AI ethics. Back then, we used to refer to it as big data ethics, which is a phrase you almost never hear these days because everyone just talks about AI at this point.
[00:06:02] Michelle BB: It's really interesting. And you talked about the specific forms of AI. Just another question for you.
Why is now such a watershed moment? Why this form of AI? And why are we seeing so much fear in this moment?
[00:06:24] Ben Roome: Yeah, there's a lot of things going on. I mean, certainly Gen AI is on everyone's radar now. Right?
Everyone who has a computer knows about ChatGPT at least, and a lot of us have at least played with it a little bit.
Also, in the last year, there's been a lot of discussion about existential risk and the idea that some future iteration of artificial general intelligence could actually be a threat to the human species.
Where I stand on that debate is that that is unlikely, but that we still need to be focusing our ethical practices on the way that we develop AI today, and that only through paying rigorous local attention to the way we develop AI today can we create the practices that we would need to ward off any larger threat to the human species.
[00:07:16] Michelle BB: So I want to come back to that for a minute because I think it's fascinating. At a recent Yale CEO Summit, 42% of CEOs surveyed said AI has the potential to destroy humanity five to ten years from now. And dozens of AI industry leaders, including Sam Altman of OpenAI, have already signed a statement that says, and I quote, mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Do we really believe this? I mean, as an AI ethicist, help me understand. Is this really an existential threat of epic proportion in the way that some of these CEOs and experts are debating at the moment?
[00:08:01] Ben Roome: Yeah, this debate is definitely an interesting one, and I've delved pretty deep into it over the course of the last several months. And in a way, actually, my love of sci-fi has meant that I've been interested in this question probably since I was a kid. But ultimately, I think, obviously, if anything can pose an existential threat to the human species, then just using the precautionary principle, we should be aware of it and looking at it, right? It would be whistling past the graveyard to just throw caution to the wind and ignore this. But at the same time, I think the concern amongst a lot of the AI community, the AI ethics community, and I agree with this position, is that when we spend too much time and resources talking about the existential risk, we often ignore the very serious negative impacts that AI is already causing. And so my position is that we have to address those impacts, and that through our work on those impacts and the mitigation of those risks, we can also simultaneously address the existential risk problem.
[00:09:07] Michelle BB: So I'm glad, because all I have is, like, Terminator in my head. And certainly I thrive on challenges, but we only have 45 minutes to save the world here, so we'd probably have to come back together. Look, let's leave the existential risk to others for the moment. This idea that we are already feeling the impacts and maybe not spending enough time addressing them: what are the most serious risks from AI that organizations should be concerning themselves with right now?
[00:09:38] Ben Roome: Yeah, great question. So the way I think broadly about AI risk is through two lenses. The first one is the lens of fairness, accountability, and transparency. Basically, the idea is that these are powerful systems and they have the potential to be powerfully unfair, right? To propagate the bias that already exists in the world and then magnify that unfairness as it gets used within our machine learning models. And so, in effect, what we need to do is focus on the development of our capacity to ensure that these tools treat people fairly, and basically accountability and transparency are means to that end. The other lens that I use to think about this is rights impact. So we're of course all aware of the UN Universal Declaration of Human Rights, and there are far more human rights, and animal rights and plant rights as well, that are recognizable beyond just what's stated in the Universal Declaration. And so when we think through stakeholders, right, groups of people or plants or animals, and the way that these tools impact their rights, we can then better identify those rights impacts and improve them, right? Both mitigate negative rights impacts and enhance positive rights impacts.
[00:11:03] Michelle BB: I think it's really helpful to hear the risk areas broken down in this way, in these categories. One of the questions I have is, with all of these risks to consider, how do we actually earn trust? How do we build trust in a system?
I think you've said trustworthiness equals culture plus capacity, but in this case, where we're talking about technology, there's still a human element in terms of the inputs that this technology takes in. How do we ensure that we can trust not just the technology, but the people behind the technology?
[00:11:43] Ben Roome: Yeah, I think people show their trustworthiness through commitments to culture and capacity. And so when my co-founder Jake Metcalf and I say that trustworthiness is a function of culture and capacity, effectively what we're saying is that an organization makes a cultural commitment, right? A values commitment to ensure that their AI programs or their use of AI systems is treating people fairly, or that it is having an overall positive rights impact. The capacity piece is actually the systems and processes that allow us to actualize those cultural commitments. So the way that I've put it is that basically culture without capacity is like making empty promises, whereas capacity without culture is basically just a bunch of processes that we don't know why we're engaging in, right? And oftentimes, when we've seen companies try to implement too many systems and processes without the big why behind them, what we find is that the people who work at those companies disengage, because they feel like they're engaging in processes that don't matter. So in order to build trustworthiness, the company has to really make both of these commitments a reality. And what I've said in the past is that learning and development executives, and really all learning and development teams, can function as a bridge between culture and capacity. Ultimately, culture and capacity are about organizational knowledge. And of course, that requires organizational learning, right? If an organization is going to make cultural commitments, and I always think those are best done by talking to everybody at the company, not just the executive team, then those commitments need to be disseminated amongst everyone in the company, so everyone's on the same footing about what we've committed to, and all the future people that are getting onboarded know what the values commitments of the organization are. Similarly, with capacity, these systems and processes only work if they are done consistently. And that requires really reliable training for people to understand: look, this is the purpose of this process. Whether it's a governance practice or a review practice for an AI tool, the company has to be able to do this consistently in order to make good on its organizational commitments.
[00:14:04] Michelle BB: Well, you are certainly speaking my language when it comes to learning and development. I think this is great.
It's a reminder that building an ethical culture is not enough. We have to grow the capacity, to your point, to operationalize our values. So let's talk for a minute about how we actually get this done within our organization when we're looking to create an AI ethics strategy, a set of policies, programs, learning, training. Where do we start, Ben?
[00:14:34] Ben Roome: Yeah, as I was saying earlier, I think the best way to start is by getting on the same page about what your organization's values are, what your ethical commitments are when it comes to these technologies. And I think people think you might have to have a lot of technical knowledge in order to make those commitments. But I don't think that's right. I think what you need to do is, broadly, as an organization, figure out what you individually and as a group want to see, right? What is the world that you want to create collaboratively as you execute the mission of this organization? And so oftentimes, especially if the company is small enough, you can conduct a day-long workshop and really get buy-in from everyone. Maybe the executive team has already done some thought work. Maybe we've invited the rest of the team to do some thought work around what their commitments are. And then through this workshop we can really, as an organization, get on the same page about what we want to achieve. Then the next question is, how do we operationalize these things? And so that often really comes down to a question of what AI systems we are going to use. Right? We've been talking earlier about the fact that AI is not just one technology; it's actually a broad suite, a multiplicity of technologies. And even within generative AI, there are so many different technologies, right? There's image production, there's text production. There are so many different kinds of things that these technologies do. And so in order to create systems and processes and policies, we have to know exactly what tools we are using. And especially given the accessibility of all of these gen AI systems, there are so many that a company might be using now. And so we really need to get our ducks in a row and find out what every team, what every department is using. And then we have to get a clear policy about how we are going to handle each one of these technologies. What are we going to do to ensure that these technologies treat everyone fairly, that we treat everyone fairly through our use of them, and that the impacts of these technologies are positive?
[00:16:43] Michelle BB: So let's dive a little bit deeper into that, because I think this is a really important point: there's so much room for bias, however unintentional, when it comes to this kind of technology. Natural language processing models, facial recognition, we could go on. How do we start to tackle this if we don't necessarily see it, right?
[00:17:07] Ben Roome: There are a lot of different techniques for measuring fairness, and actually, to get philosophical again for a minute, there are a lot of different definitions of fairness. So, for instance, the disparate impact rule is a federally recognized rule. It's been around since the 70s. It's sometimes referred to as the four-fifths rule. And so that is one broadly recognized measure of fairness. It is certainly not the only one, and it does not work in a lot of cases where you would actually want to hold the use of your tool to a much higher standard of fairness. But the point is that you have to make a commitment, and it has to be a measurable commitment, right? It has to be expressed mathematically, and then you have to continuously measure the fairness impacts of that tool to ensure that it is meeting whatever commitments you've made for your organization.
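To make the four-fifths rule Ben describes concrete, here is a minimal illustrative sketch of the calculation. It is not from the episode; the group names, counts, and helper functions are hypothetical, and 0.8 is simply the conventional four-fifths cutoff.

```python
# Illustrative sketch of the "four-fifths" (disparate impact) check.
# All group names and counts below are hypothetical example data.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of a group that received the favorable outcome."""
    return selected / total

def disparate_impact_ratio(rates: dict) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from a model-assisted screening step.
rates = {
    "group_a": selection_rate(selected=48, total=100),  # 0.48
    "group_b": selection_rate(selected=30, total=100),  # 0.30
}

ratio = disparate_impact_ratio(rates)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.62

# The four-fifths rule flags a ratio below 0.8 as potential adverse impact.
if ratio < 0.8:
    print("Below the four-fifths threshold: review this tool's outcomes.")
else:
    print("Meets the four-fifths threshold, though stricter standards may still apply.")
```

As Ben notes, this is only one possible measure; the point is that whatever fairness commitment an organization makes should be expressed this concretely and re-measured continuously.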
[00:17:58] Michelle BB: I think this is so interesting and important for organizations that may be thinking of how to start using generative AI, but really haven't put some of the foundational work into place with respect to how we're going to use it fairly: what policies, what programs we need to have in order to ensure that our people are using it in an ethical and responsible way. As you start to think about where organizations need to be headed, what are the big things that you believe they need to be thinking of as we move forward, as generative AI becomes even more commonplace, as more and more people adopt its usage, and we start to see more and more work get done by these technologies?
[00:18:48] Ben Roome: Companies are adopting generative AI technologies at such a rapid rate, and they are useful to basically every person in every department of the organization. So I think the things that companies need to be focusing on right out of the gate are, first of all, truth and accuracy, right? We all know that chatbots like ChatGPT are prone to hallucinations. They will fabricate information. One of my favorite examples is from a couple of months ago: an attorney used a generative AI platform to write a legal brief, and the platform fabricated case law. It made up case history and just passed it off as, basically, its legal reasoning. And this attorney, rather than looking up to make sure that these cases were in fact real cases, just submitted the brief uncritically. This is exactly what you want to avoid. People are using chatbots to write blog posts, to do all kinds of different things. People use them to write RFPs as well in the sales department. And when they fabricate information, of course, this can be really disastrous, right? We can perpetuate misinformation and disinformation. It's a really serious problem. So always make sure that everything that is generated by a generative AI platform is properly fact checked before it gets disseminated.
The next broad issue is around copyright and intellectual property. There are two facets of this. One is that a lot of these generative AI models were trained on public information. So for instance, image generation tools were often trained by feeding a whole bunch of images from the internet into the model. Well, of course, these images were produced by an artist at some point. And some of these image generation platforms are even able to do work in the style of a living artist. So for instance, if you ask a gen AI platform to give you an image of a person in the style of Rembrandt, right, Rembrandt has been dead for hundreds of years. He's not worried about the copyright at this point. But living artists are still trying to make a living, and in effect, these models are stealing their intellectual property. And so companies need to make very sure that they are not taking the intellectual property of a living artist, whether that be a visual artist or a writer or any other kind of content producer. The other concern around intellectual property is what people put into chatbots when they are trying to author a prompt. There have been some worrisome cases of companies, or representatives of companies, putting trade secrets into the prompt in order to have the chatbot write an RFP for them. And then, obviously, that information gets passed to the company who created the chatbot. And in some cases, it can even be produced as a response by the gen AI platform at a future date, which obviously is a terrible outcome for your organization if you have trade secrets you're trying to keep secret, and now all of a sudden they've been made public by careless use.
There are also big privacy and security concerns, right? There are all kinds of personally identifiable information that people have access to. Marketing teams have access to customer records that they may want to put in, in order to have ChatGPT write a user archetype profile or something like that. But of course, sharing that information with a generative AI platform is really dangerous. You could accidentally expose people's private information where you shouldn't be. And then the last piece, of course, is the kind of broader fairness and bias impacts that are pertinent to all AI systems. We need to think broadly about the way that the training data sets might contain bias. So for instance, there are a lot of image platforms where, if you ask them to show you an image of a doctor, they will show all men as doctors. This is kind of a prototypical example of how bias gets perpetuated within machine learning models. So that's a concern here as well.
[00:23:17] Michelle BB: So you said something at the beginning of this that, by the way, is fascinating: where I think generative AI really differs is that it is incredibly accessible to just about everyone and anyone. If you can write, you can use generative AI, which is really not the case for all AI tech. And I think the points you make are critical. First, truth and accuracy: making sure that what you're using generative AI for is accurate. So check your work, people. Second, copyright and intellectual property.
I recall the story of the deepfake Drake track, which was incredible. And now there's a question about whether or not the AI-generated song can actually be put up for a Grammy, which it can't, apparently. On intellectual property, everything is public, right? And that's a great point: whatever you put in is now in the public domain. Third, privacy and security. I think we all need to be careful, I mean, we always have, about our own data, but now is there greater risk as we start to adopt and use generative AI more? And then the last, this topic that we touched on earlier around fairness and bias impacts: are we making sure that we're mitigating bias as much as we can? I think these are absolutely critical when it comes to policy creation, when it comes to training and learning within organizations. We've got to make sure people understand these points before they're able to utilize the technology internally.
[00:24:51] Ben Roome: Yeah, exactly. I think when companies start to encounter these technologies, obviously the accessibility, as you say, is a major piece of this, and they have just become more accessible than they've ever been in the past. And so this is why I'm constantly underscoring the importance of an AI ethics strategy for every company, even if they just use AI tools, let alone if they are actually building them. In order to make sure that the impacts of your use and development of AI systems are positive, and that you don't incur the reputational and monetary risk of using these systems carelessly or uncritically, you need to be aware of all the ways that AI is getting used within your organization. And there need to be clear and specific policies for addressing all of those risks.
[00:25:40] Michelle BB: I think that's great guidance, but I have to ask you, and maybe we'll turn you into a futurist here: when you think about what's next with respect to AI, I don't think we would have imagined even a year or two ago where we'd be with generative AI. But what predictions do you have for where this technology is taking us? Any predictions for the things that are around the corner that we need to be aware of?
[00:26:13] Ben Roome: Yeah, that's a great question. I think my hope is that we will start to see the benefits of these technologies really accrue equally to everyone through good ethical development. So we're starting to see a lot clearer descriptions of the beneficial return on investment for engaging in AI ethics practices. And those include, of course, economic returns, your kind of standard model of return on investment. There are also reputational benefits, and then there are also capacity benefits, right? When you do AI ethics, you are building your capacity for risk assessment, your capacity for recognition of new opportunities, as well as your capacity for regulatory compliance. And so all of these benefits, I think, are going to start to accrue more broadly within organizations. And what I hope we see is ultimately an improvement of overall organizational culture through the development of more detailed AI ethics practices.
I think the next big technological question is maybe how soon artificial general intelligence will be with us, or maybe upon us.
There are a lot of different predictions around whether this might take decades, whether it's possible at all, or whether it might be possible in the next five to ten years. If that happens, we are going to see some very rapid change in this world. And so we are going to have to come together as a society and make sure that the way these technologies get used impacts humans, plants, and animals positively. And so that's why I think, in a way, the concerns around AI ethics are not just the problems of ethicists; they are going to become global problems, just the way questions of climate or pandemics or global conflict are issues that affect all of us. And so we need to be aware of these issues and start thinking together about the world that we want to create collaboratively.
[00:28:26] Michelle BB: Ben, thank you. Your knowledge of and passion for this topic were evident throughout, and you really helped us articulate some very complex ideas in a much easier-to-understand manner. So thank you for that. I also hope that, for our listeners, this has helped shed light on the ethical challenges posed by AI and the ways in which we as leaders need to be navigating them. But before we wrap up, I don't know if you've listened to this podcast, but we do the same thing every single episode. I have all of my guests answer three questions, and they're the same ones that I've used since we started in 2020. It's really a three-parter. The first is: what are you learning right now, or what have you learned recently? The second question is: how are you applying what you've learned? And then third: what advice would you share with others? So what are you learning? How are you applying it? What advice would you give?
[00:29:32] Ben Roome: Yeah, I love those questions. So what I'll say is that what I've been most interested in is sort of the broader questions around organizational culture. I've been learning a lot about this because it touches on a lot of different interests of mine, both in AI ethics and in education technology.
And so what I'm seeing ultimately is that this question of culture and capacity doesn't just apply to AI ethics, but really applies to a broad variety of facets of organizational culture. And so what I am trying to do in order to apply this is basically to help companies think about the ways that they can broaden their overall organizational culture through the development of their AI ethics practices. And the advice I would share with others is basically that it's never too early or too late to start doing this thought work around your values. In some sense, this is existentialist work, to be a philosopher again for a second, right? What do we care about? What has meaning for us in the world? What gives the world meaning? What sort of world do we want to see, and who do we want to be in that world? Asking that question both, A, specifically in terms of the way that we are using AI and technology in general, and then, B, in terms of how we want to show up in the world as an organization and as individuals. And how can we apply that through our awareness and development of culture and capacity?
[00:31:13] Michelle BB: Thank you so much, Ben. I am so grateful that you came. Your insights and practical approach to AI and ethics have, I think, given our audience a lot to think about, and perhaps some much needed reassurance that maybe AI won't take over the world, or that maybe we will get smarter about how to use it in this world for good. Here at Skillsoft, we propel organizations and people to grow together through transformative learning experiences. And I hope you've enjoyed this episode of The Edge as much as I have. Be sure to tune in as we unleash our Edge together. Now, before I leave you, I'm going to share a little joke, courtesy of our mutual friend ChatGPT.
And it's this: what's gen AI's favorite game to play?
Risk. I'm Michelle BB. Until next time, keep learning, keep growing, stay safe.
Power Up Your Skills
Harness the power of persuasion skills through Skillsoft's immersive learning platform and learning content, with a free trial of Skillsoft Percipio.
About Our Guest
Ben is a co-founder and consultant at Ethical Resolve, specializing in the ethics of artificial intelligence and other data-dependent technologies. Ethical Resolve has helped large companies, startups and investors to grow ethical capacity, build trust and avert risk since 2014. The company focuses on identifying and mitigating potential negative impacts, developing ethical culture, and creating stable and reliable processes for ethical decision-making. Recently, Ben has been heavily focused on helping clients prepare for algorithmic regulation.
About Our Host
As Chief Marketing Officer, Michelle leads a global marketing organization, focused on transforming today’s workforce for tomorrow’s economy. Since joining the company, she has been responsible for Skillsoft’s global marketing strategy, which includes generating awareness, driving preference, and building affinity for Skillsoft. Additionally – and perhaps most importantly – Michelle serves as the company's brand evangelist, helping to build a vibrant community of passionate learners.
With more than 25 years of marketing, branding, and strategy experience, Michelle has made it her personal mission to support the advancement of women in business. Prior to Skillsoft, she served as Chief Marketing Officer of IBM Watson, where she was instrumental in developing the first “Women Leaders in AI” program, which honors women who put AI to work across industries and around the globe. She also served as the global head of marketing for The Weather Company, an IBM Business, helping companies understand how to anticipate, plan for, and ultimately make better decisions – with greater confidence – in the face of weather.
Michelle is a prolific speaker on a range of topics, including the war for talent, digital transformation, and marketing in a post-pandemic world. She covers these topics and more as the host of Skillsoft's podcast, The Edge, now in its second season. She has authored countless papers covering a range of business and marketing topics, was at the center of Skillsoft’s leadership role in DEI through free “Leadercamps,” and has taught two Percipio courses on the Pink Pandemic and Public Speaking.
Michelle is also a founding member of CMO Huddles, a group dedicated to bringing together and empowering highly effective B2B CMOs to share, care, and dare each other to greatness. Michelle holds a Master’s degree from Simmons University and sits on the pro side of the Oxford comma debate.