Bias Exists in Our Technology
About This Episode
Communications intern Bethelly Jean-Louis and Dr. Ruha Benjamin, esteemed author and professor of African American studies at Princeton University, talk with Michelle about discrimination and bias within technology.
Read Transcript
Michelle BB 00:00:07 Welcome to The Edge, a Skillsoft podcast for learners and leaders alike to engage in thought-provoking conversations and open dialogue on the topic of learning and growth in the workplace. This is an incredibly special episode, because not only do we have an amazing guest, whom I'll get to in a moment, but today I am joined by a colleague who will co-host with me. So let me introduce her first. Joining me is Bethelly Jean-Louis, a graduating senior at Emerson College in Boston and a communications intern here at Skillsoft. Beth brings her skills as a storyteller to bear, along with her passion for and championing of diversity, equity, and inclusion matters. So, Beth, thank you so much for participating in today's podcast.
Beth 00:00:53 Thank you so much, Michelle, for having me. This is such a great opportunity. I'm beyond excited. Thank you so much again.
Michelle BB 00:01:01 Would you like to take a moment and introduce our guest?
Beth 00:01:05 Yes, absolutely. It is such an honor to introduce Dr. Ruha Benjamin, associate professor of African American studies at Princeton University and founder of the Ida B. Wells Just Data Lab, which is focused on rethinking and retooling data for justice. Dr. Benjamin, it is an honor and a privilege to have you here today. Thank you so much for joining.
Dr. Ruha Benjamin 00:01:28 Thank you so much for inviting me. I'm thrilled to be in conversation with you today.
Michelle BB 00:01:33 We're grateful to have you. You know, why don't we dig right in? As the world around us is increasingly shaped by technology, technology is deeply shaped by the humans who create it, right? Which means that as we build new systems to make the world a better place, we are also embedding our own biases and thinking into what we're building. And as we rely more and more on technologies like artificial intelligence, there is a real need to understand, recognize, and take action to prevent these biases that can and do discriminate. Dr. Benjamin, look, I'm the former CMO of IBM Watson, so I am thrilled to have the opportunity to speak with you. And I first have to congratulate you on winning the 2020 Oliver Cromwell Cox Book Award for your latest book, Race After Technology: Abolitionist Tools for the New Jim Code. I'm about halfway through it, and I love it. It really addresses and draws awareness to the biases (racial, gendered, socioeconomic) that exist in our technologies. And so perhaps to start, could you tell us more about what inspired this writing and how you use the book to explore the intersection of technology, bias, and race?
Dr. Ruha Benjamin 00:02:42 Absolutely. And thanks so much for digging into the book, Michelle, I really appreciate it. So like any question of origins, there are different ways that I could tell that origin story; there's a short version and a longer version. And I will just say, for the longer version, rather than taking you down a rabbit hole, that as an undergrad student I was trained in sociology, but within the field of sociology my interest and specialty was the sociology of medicine, science, and technology. And so I was interested in the social dimensions of that, in popping that bubble that we put science and medicine and technology in, and bringing it back down to earth in terms of the day-to-day social relations that create it. So I always had this interest, and I was drawn to that dynamic between society and science. But in terms of Race After Technology, the specific focus on algorithms and machine learning and AI, I noticed a few years ago the proliferation of headlines and hot takes in the news about so-called racist and sexist robots.
Dr. Ruha Benjamin 00:03:49 And so there were a number of headlines drawing attention to the way that automation more broadly was reinforcing racial and gender inequalities, and the way the headlines put it in terms of robots was a kind of catchy shorthand to talk about this phenomenon. And so, as I was watching this unfold in the popular media, I was really interested in bringing the much deeper intellectual engagement with these issues into the conversation, so that we could collectively wrestle with the fact that the issues emerging now around AI and machine learning are not brand new. They have a much longer trajectory that's not simply about the technology, but about the society in which the technology is developed. And so in writing this book, it was my goal to bring that longer intellectual engagement into conversation with the popular concerns, and to write it in a way that people who don't necessarily feel they have a say in high-tech matters would feel they do have a role to play, in terms of democratizing this conversation.
Michelle BB 00:04:58 That's great. Beth, do you want to step in? Because I know you've got a couple of questions too.
Beth 00:05:03 Yes, absolutely. Dr. Benjamin, technology is everywhere, and it's not just an isolated sector; it intersects with multiple areas of our lives. It's embedded in every facet. We rely on it for everything from work and school responsibilities to social reasons, and now telehealth doctor's appointments. We assume that technology is designed for good, to make it simpler to complete our everyday tasks. Yet increasingly we are learning that technology, whether intentionally or not, is influencing human thought and action. Techno-determinism, also called technological determinism, is a theory that frames technology as a driving force behind culture and society. Why is techno-determinism such an important piece of the broader conversation around understanding bias in technology?
Dr. Ruha Benjamin 00:05:56 Thank you for that question, Beth. I'll start by saying there are two main conversations, or stories, that we often tell and are told about technology. One is the idea that technology is going to take all the jobs, it's going to automate everything, it's going to remove human agency. That's what we might think of as the techno-dystopian story, and it's the one that Hollywood really likes to tell us, in terms of The Terminator and The Matrix, where we're all going to get plugged in as batteries. The other story is the techno-utopian version, the idea that you articulated in the question, that the technology or the robots are going to save us; they're going to make everything more efficient and equitable. And that story is the one that Silicon Valley sells us. And so these seem like opposing stories: the idea that technology is going to harm us or help us.
Dr. Ruha Benjamin 00:06:53 But when you peel back the screen and look underneath the story, you see a shared logic: that technology is in the driver's seat, and we are either helped or harmed by it; we are determined by it. That's where the concept of techno-determinism comes in, because it's a false understanding of our relationship to technology, that we just feel its effects and human beings aren't really in the driver's seat. And there are a lot of people who sell that story, whether it's the negative or the positive version. So by naming it techno-determinism, we can begin to reclaim our collective power. We can begin to question: why is it that a small sliver of humanity is currently doing all of the designing of our future? Shouldn't more people have a say? Why are only certain things the focus of technological innovation, while other things that may affect many more people never get invested in? And so by calling it techno-determinism, we can begin to actually recoup our collective power, to ensure that the things we're creating reflect our collective values rather than just the profit motives of those who seek to maximize their own benefit from technology.
Michelle BB 00:08:07 You know, interesting. I just wrote down: are we giving far too much credit to the technology itself and not to the man behind the curtain? Because we know there is one. And I read an article that you published in Science last October that highlighted the racial disparity that exists when we have these predictive tools aiding in decision-making. I think the example you cited came from a study that found that an algorithm predicting the health scores of individuals disproportionately disadvantaged Black people, because the data and algorithms that calculated the scores used predicted cost of care as a primary input. And unfortunately, we know that less is spent on the cost of care for Black Americans, and therefore the scores contained bias. And this sort of bias in the technologies we think are here for good is not limited to healthcare. When we look at predictive tools like artificial intelligence, they're evaluating mortgage loan applications, they're screening candidates for employment opportunities, they're using facial recognition; your book mentions beauty contests, even. So in a lot of ways we've actually automated racial discrimination. How do we even begin to dismantle this and take, as you say, a more socially conscious approach to tech development, and AI in particular?
Dr. Ruha Benjamin 00:09:23 Yeah. I mean, there's this idea that we're going to outsource all of these really important human decisions, whether they're decisions in healthcare or education or the criminal punishment system, across the board; the idea that technology is going to save us from our own biases. What's kind of ironic is that the outsourcing of decisions has gone hand in hand with a growing awareness of human biases and all the language around implicit bias. So it's like we're awakening to that phenomenon, and then we're saying, well, we can't deal with that directly, so let's just let technology make the decisions, which we assume will be better than us. But to do that, we have to assume that technology grows on trees, that human beings don't actually encode their own worldviews, their own historical patterns. And so that healthcare example is a great one, because the only way you could presume that this automated decision system was going to bypass biases is if you were not privy to the long history of racial discrimination in the healthcare system that shaped the data being used to train the system.
Dr. Ruha Benjamin 00:10:31 And so a very first step is to understand, when it comes to something like deep learning, that computational depth without social and historical depth is really superficial learning. It's going to reproduce the status quo. As much as you need people on your team who have the technical skills, you need people who have the sociological and historical understanding, the understanding of the economy and the politics, because all of those are embedded in the data, embedded in the models, and embedded in the choices of variables, for example, cost of care. Any sociologist of healthcare could tell you about that spending pattern and how it would likely produce this really adverse outcome. And yet the only reason we know about it is that this particular company allowed researchers to open up the black box of the algorithm, which isn't typically possible with a proprietary system.
Dr. Ruha Benjamin 00:11:26 So I think the first step is really to understand the range of knowledge that's needed in order to build socio-technical systems. And I say socio-technical because no system is purely technical; there's a social context in which it's designed and in which it's going to take shape. And the more we understand the relationship between the social and the technical, the more we understand that we need a broad disciplinary range around the table, not just to build the system, but to ask the questions that we expect the system to address. It's at that point of asking the questions that we want technology to address that is the ground zero for actually building equity and justice into tech design.
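To make the cost-of-care proxy problem Dr. Benjamin describes concrete, here is a minimal sketch in Python. This is not the actual model from the Science study; the data is entirely synthetic and every variable name is hypothetical. It only illustrates the mechanism: when spending is systematically lower for one group at the same level of health need, a model that ranks patients by cost will under-enroll that group's highest-need patients.

```python
# Minimal sketch of proxy-variable bias with synthetic data.
# Assumption (for illustration only): group "B" receives less care
# spending than group "A" at the SAME underlying level of health need,
# the kind of documented spending gap discussed in the episode.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true health need.
group = rng.choice(["A", "B"], size=n)
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # latent health need

# Historical spending tracks need, but is discounted for group B.
spending_gap = np.where(group == "B", 0.6, 1.0)
cost = need * spending_gap + rng.normal(0.0, 0.1, size=n)

# "Risk score" = cost of care, standing in for a model trained with
# cost as its label. The care program enrolls the top 10% by score.
enrolled = cost >= np.quantile(cost, 0.90)

# Audit: among the genuinely highest-need patients (top 10% by true
# need), what fraction of each group actually gets enrolled?
high_need = need >= np.quantile(need, 0.90)
for g in ("A", "B"):
    mask = high_need & (group == g)
    print(f"group {g}: high-need enrollment rate = {enrolled[mask].mean():.1%}")
```

Run as written, the audit prints a far lower enrollment rate for group B's high-need patients, even though need was generated identically for both groups: the cost label has quietly encoded the spending disparity as "low risk," which is exactly the kind of finding that required opening up the black box.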
Michelle BB 00:12:10 That's really interesting, and I want to touch on something, because I think there's this sort of moral superiority; I think you've written that because we are letting technology make some of these decisions, we're making the assumption that the technology is making the right decision. In your book, you explore the existence of engineered inequity and default discrimination, and highlight that there are real-world, tangible consequences of these biased technologies. And I would probably say that even with all of the headlines we've seen out there, the impact is still largely unrecognized, and people don't necessarily understand the harm they are doing as they continue to outsource and automate this decision-making. So I guess the question is, how do we drive greater awareness? And how do you, as somebody who may be affected by some of these decisions, become a better advocate for yourself?
Dr. Ruha Benjamin 00:13:09 Yeah. I mean, I think that's really the million-dollar question. And I do think it's particularly difficult when we think about the response at the individual level. It's hard for us as individuals to take on an issue of this magnitude. That's why so many of the interventions I try to map in the book and in my own work are really about getting us to think about how we have to build collective power, how we have to organize, what has to change. For me, in my own sandbox, that means pedagogy, the training of people going into STEM. That is where I'm seeding different kinds of frameworks, so that you have people around the table who are like, well, no, that's probably not a good idea because of X, Y, and Z that I learned when I was in undergrad or in grad school.
Dr. Ruha Benjamin 00:13:56 So education, and rethinking how we train people, is just one pillar, one ground zero. We also need to think in terms of legal advocacy, in terms of litigation. Over the last few years there have been some really important class-action lawsuits against different entities, government, public, and private, that have instituted automated decision systems that have really harmed people. In Michigan, a particular system called MiDAS was flagging people for unemployment fraud, and it was wrong in over 90% of the cases. There's a housing case in D.C. in which housing developers were using Facebook's targeted-ad program in order to exclude elderly people. So litigation is one other area. And then there's a lot of organizing happening among technical workers, people in the tech workforce, within their own companies, to build accountability and to create red lines.
Dr. Ruha Benjamin 00:14:57 Like, we will not build that. We will not build a surveillance system for ICE that's going to snatch children away from their families, for example; that kind of moral responsibility and ethical framework from within the industry. But also, and I think most importantly, we have to completely rethink the ecosystem in which technology is being designed, that is, the regulatory ecosystem. Right now it's kind of the Wild West, and many companies would like it to stay that way, without strong public accountability. I think we've seen some important pushback around that over the last few years. So I would summarize it by saying we need to stop thinking of ourselves as users of technology, and rather as stewards, thinking about what the public's relationship with tech design is and creating technology in the public interest.
Michelle BB 00:15:53 You know, I think that makes a lot of sense. And just so our listeners know, technology is fallible, right? We're experiencing that right now: my co-host Beth unfortunately dropped off, so I'm going to take on a couple of her questions, and we'll hope that she joins us again. She actually had a couple of questions, Dr. Benjamin, that I feel are important to ask, because these are the questions that she designed, and I think they're pretty powerful. So her question was: in the wake of George Floyd's tragic murder and the monumental cultural awakening that's followed, there has been an immense focus on social justice, as people around the world seek to become better allies and use their privilege to support people of color. To address the issues of systemic racism, discrimination, and bias in a more productive way, a way that would hopefully lead to long-term, enduring outcomes, these biases in technology have to be eliminated, and we all have to stand up; we all have a role to play. So her question is: whose responsibility is it to take that first step, and what are some realistic goals for doing so?
Dr. Ruha Benjamin 00:17:07 Yeah, I love that question, any question about what we do next and whose responsibility it is. And the fact of the matter is, the problems we're up against have been and are continuously produced through so many different avenues. They happen in top-down form through our policies and laws. They happen bottom-up in terms of our cultural practices and norms. They come at us sideways through our educational system, our healthcare system, our employment sector. So precisely because the problem is everywhere and it's so complex, some people just throw their hands up and say, well, that's too big, we can't do anything. But actually, mapping it in that way shows us that there are so many different avenues, and there's responsibility to go all the way around. Everyone has a way in which they can begin to address it in their own backyard.
Dr. Ruha Benjamin 00:18:00 Think about what your sphere of influence is. Even if your sphere of influence, which I think happens to be a really important starting point, is as a stay-at-home parent, and you spend a few hours every day at the local park (not during COVID, obviously, but in normal times), that is a ground zero where we're seeding particular frameworks. We're teaching young people how to see one another and interact with one another. And we are either maintaining the status quo by letting them just inhale the toxic frameworks that come at them from their cartoons and their textbooks, or we're constantly deprogramming and trying to put in frameworks that are robust and actually project equitable and just visions of the world. And so the point is that there are so many different starting points.
Dr. Ruha Benjamin 00:18:57 For students in particular, and I believe Beth is a student at Emerson, what we're seeing is that students are actually pushing back and understanding that they're not simply consumers of education. Let's say you're in an engineering program or a computer science program, and you don't feel that the program is adequately training you to intervene responsibly as a tech practitioner in the world, in terms of what is considered essential knowledge before you're allowed to graduate: is ethics thrown in in a tokenistic way, one little module or one little guest speaker, or is there a much better framework that really integrates a deep understanding of the social and historical dimensions of technology into your training? You are not simply a consumer of what the university is handing out; you can actually work with other students to raise awareness. I've seen this in the context of medical schools, where medical students have done this work of integrating anti-racist curricula into their programs. And my hope, thinking about students who might be listening, is: what does that look like in computer science or engineering programs? So that's just one avenue, based on where you happen to be in your stage of life or in the world, to think about whether you are maintaining things as they are or potentially subverting and actually transforming them. We all need to ask that of ourselves and figure out who we need to connect with to make that happen.
Michelle BB 00:20:27 It almost sounds like, as part of our STEM programs, we really need to ensure there's an ethics component if there isn't one already. I'm sure that in some programs there is, but it seems to me that the ethical element is important, because at the end of the day, the decisions made by these systems, these algorithms, these predictive technologies, are in fact about people and can have massive repercussions. But Beth had another question, and I love this, because what's so amazing about our interns and younger people is that despite everything we're facing, they are so incredibly optimistic and they want to change the world. And one of her questions I just loved: she wanted to hear some examples of ethical and positive applications of technology. Are there instances where we see either artificial intelligence or machine learning actually improving outcomes for underserved populations?
Dr. Ruha Benjamin 00:21:35 Thank you for that. It actually ties to your call for thinking about ethics in education, because one of the things we have to wrestle with is how ethics, as a kind of individual model of people doing the right thing, is important but not sufficient. We can't rest this entire infrastructure on a model of morality at the individual level. We need safeguards so that even when people don't want to act ethically, we have outcomes that are not harmful. And that's partly why I like to frame the conversation around technology in terms of power and the public good. What does it mean to build technology for the public good, not just ethically, but really in a proactive way? How do we actually make sure that this is helping people?
Dr. Ruha Benjamin 00:22:33 And so some of the examples that I really like are when the digital tool or the data is being used to expose certain kinds of harms or inequalities that people are not often willing to face, and once that exposure happens, putting that data and that knowledge in the hands of community organizations who've been working on data justice well before these tools came along. One example I love is the Anti-Eviction Mapping Project, which happens to be extremely relevant right now, as thousands of people around the country, right as we are having this conversation, are being pulled out of their homes and evicted during the pandemic. I just saw (I couldn't even watch the whole thing) a clip of some families in Houston being pulled out of their home, the kids, the grandparents, everyone, because they couldn't pay rent. This eviction crisis and homelessness crisis existed well before the pandemic, but it's being amplified and deepened now.
Dr. Ruha Benjamin 00:23:32 But the good news is there are several organizations that have committed to bringing this to light and using data-justice tools, digital tools, to map the problem and to project it not onto individuals who may be experiencing economic uncertainty, but onto landlords, onto housing developers. And so for Beth, the key there is that the tool, the data that you're producing, has to be directed at those who are creating the problems people are suffering from. Too many of our predictive tools, assessment tools, and automated decision systems are trained on the most vulnerable in our society. They're trying to predict whether someone's going to follow parole or not; they're trained on whether some students are high-risk or not. And those are often framed as do-gooding things too, like, we want to make this system better, but it's pointed in the wrong direction. So as a first step, like I said, going back to the questions we're asking, we really need to think about who we're collecting data on, to what ends, and who's going to use this data. What we really need now is data that can be used by social movements to advance housing justice, advance health justice, advance prison abolition. If the tools aren't doing that, then they're likely just deepening the problem or delaying real work on these problems.
Michelle BB 00:25:07 So if you'll indulge me, I'd like to take a moment and give a shout-out to my mentor, Caroline Taylor. Her personal passion was around helping Stop The Traffik, right? People shouldn't be bought and sold, and she was really addressing the human trafficking issue that plagues the world. And I think this is one of the best applications of big data that I've seen: when you bring all of that data to bear, this organization, Stop The Traffik, has been able to disrupt, combat, and in a lot of cases prevent the global issue of human trafficking. When I listen to what you've said, this is where and how we should be applying technology for good, where this kind of data can be used in support of some of the biggest challenges that we're facing in this country and in this world.
Dr. Ruha Benjamin 00:26:05 Thank you for sharing that example. I want to look them up.
Michelle BB 00:26:09 Stop The Traffik, with a K, yes. I'm a huge fan. So, as we wrap this discussion, I'd love to get some parting thoughts from you. For our society, and certainly for those people who are out there now developing these algorithms, working on these predictive tools, getting excited about the possibility and the potential of artificial intelligence: what can we do, or probably better, what must we do, going forward, to rethink and reshape our relationship with technology? In some ways I want to say we've got to take back ownership. But how do we reshape that relationship we have, to make better decisions?
Dr. Ruha Benjamin 00:26:52 Yeah, absolutely. I would really encourage all your listeners to check out a framework from one of my colleagues called the design justice framework. They've outlined a number of design justice principles that I think should be starting points for any technology team, any design team, to think through the social context that we're building whatever product, whatever system, in. And the fact of the matter is, the study and internalization of that runs counter to the race to market, and the idea that we have to beat our competitors out, that we have to get the next 2.675 version of whatever out the door. The market logic actually runs counter to a real, rigorous reflection on, and incorporation of, design justice in any kind of process. And so studying that framework, I think, is really important.
Dr. Ruha Benjamin 00:27:57 The second resource that I would encourage everyone to check out, and perhaps have a sort of reading group around, is called the Advancing Racial Literacy in Tech handbook. You can download it for free; it was designed by the Data & Society Research Institute in New York City. The goal of this handbook is threefold. First, it's to develop an intellectual understanding of how structural racism operates in social media platforms and in technologies not yet developed, so it provides a kind of intellectual grounding. But it also goes deeper and gets the people engaging with this resource thinking about the emotional intelligence that's needed to address the organizational cultures and priorities that often block people, however well-meaning, in their desire to actually build more equitable tools. And the last piece is really a call for concrete actions to support communities of color and other marginalized communities. So it's a short handbook, a resource just to get started. But coupled with the design justice framework, I think it's really important that we don't just rely on our common sense, or on our own sense that we mean to do good and therefore the outcomes of whatever we create must be good. We really need to become as serious and rigorous about the social relations that shape technology as we are about the actual code and programming and technical literacy that we expect on our teams.
Michelle BB 00:29:29 Thank you so much for that. And I would add that another great resource is, in fact, your book, Race After Technology: Abolitionist Tools for the New Jim Code. I do encourage people to read it. Again, I'm halfway through, I'm going to keep going, and it's amazing so far. And for our leaders who want to go even deeper, there's yet another avenue, because Dr. Benjamin will be keynoting our Understanding Bias in Data Bootcamp later this month. What an amazing opportunity! This bootcamp is designed to help attendees understand the bias that exists in technology, identify the unintended impact that unconscious bias can have, and, most importantly, take appropriate action to eliminate it. You can learn more about the bootcamp, or register, at www.skillsoft.com/bootcamps. On behalf of Beth, and I know she's probably just forlorn right now that she wasn't able to finish, I do want you to know that a lot of this content and thought really came from her; she was so instrumental in bringing this to bear. But I personally would like to offer my thanks as well. I'm looking forward to the bootcamp, by the way; it's September 29th and 30th. Please pick up a copy of Dr. Benjamin's book, and thank you all for tuning in to this and every episode as we unleash our edge together.
Dr. Ruha Benjamin 00:30:50 Thank you for having me. Talk to you all soon at the bootcamp!
About Our Guest
Dr. Ruha Benjamin is a professor of African American studies at Princeton University and author of Race After Technology: Abolitionist Tools for the New Jim Code. The book examines the relationship between machine bias and systemic racism, analyzing specific cases of “discriminatory design” and offering tools for a socially conscious approach to tech development.
About Our Host
As Chief Marketing Officer, Michelle leads a global marketing organization, focused on transforming today’s workforce for tomorrow’s economy. Since joining the company, she has been responsible for Skillsoft’s global marketing strategy, which includes generating awareness, driving preference, and building affinity for Skillsoft. Additionally – and perhaps most importantly – Michelle serves as the company's brand evangelist, helping to build a vibrant community of passionate learners.
With more than 25 years of marketing, branding, and strategy experience, Michelle has made it her personal mission to support the advancement of women in business. Prior to Skillsoft, she served as Chief Marketing Officer of IBM Watson, where she was instrumental in developing the first “Women Leaders in AI” program, which honors women who put AI to work across industries and around the globe. She also served as the global head of marketing for The Weather Company, an IBM Business, helping companies understand how to anticipate, plan for, and ultimately make better decisions – with greater confidence – in the face of weather.
Michelle is a prolific speaker on a range of topics, including the war for talent, digital transformation, and marketing in a post-pandemic world. She covers these topics and more as the host of Skillsoft's podcast, The Edge, now in its second season. She has authored countless papers covering a range of business and marketing topics, was at the center of Skillsoft’s leadership role in DEI through free “Leadercamps,” and has taught two Percipio courses on the Pink Pandemic and Public Speaking.
Michelle is also a founding member of CMO Huddles, a group dedicated to bringing together and empowering highly effective B2B CMOs to share, care, and dare each other to greatness. Michelle holds a Master’s degree from Simmons University and sits on the pro side of the Oxford comma debate.