The Algorithm Isn't Perfect: 4 Best Practices for Combating Bias in Generative AI
Before businesses can reap the rewards of advanced AI, they must wrestle with the ways it reproduces harmful human biases.
For some organizations, generative AI (GenAI) tools like ChatGPT and DALL-E represent exciting new possibilities. In a recent report from McKinsey, a third of respondents said their organizations use GenAI regularly, and 40 percent said they plan to increase investments in the technology.
But those numbers also show that not everyone is on board with GenAI just yet. Major brands like Apple, Verizon, and JPMorgan Chase have restricted GenAI's use or banned it outright. Many of these businesses are hesitating because of the thorny questions surrounding ownership of AI-generated content, the reliability of AI assertions, and — our topic today — the potential for bias.
Shortly after ChatGPT's public debut, academics raised alarms about the chatbot's penchant for producing biased outputs, like some dubious "algorithms" that determined only white and Asian men make good scientists. A Bloomberg investigation found that the AI image-generator Stable Diffusion was more likely to represent women and people of color as working low-paying jobs.
While OpenAI and Stability AI — the organizations behind ChatGPT and Stable Diffusion, respectively — have responded by redoubling their efforts to strike bias from their models, discriminatory AIs aren't exactly a new problem. Back in 2018, for example, Amazon discontinued an AI hiring tool because it gave preferential treatment to male candidates.
This isn't to say GenAI is all bad. Indeed, AI could add as much as $4.4 trillion to the world economy by supercharging productivity. But if organizations are to reap the benefits of GenAI, they'll need to take a strategic approach to managing biased outputs and other risks.
The first step is understanding why AI can discriminate in the first place.
How an AI Becomes Biased
In the popular imagination, the term "artificial intelligence" conjures images of characters like Star Trek's Data, a sentient being of pure rationality. Real-life AIs are much less fantastical. Simply put, they're complex algorithms trained on massive amounts of data. They don't "think" so much as programmatically predict what they should do.
Take ChatGPT, for example. This large language model is trained on a corpus of more than 300 billion words, resulting in what is essentially the world's most sophisticated autocomplete system. When a user prompts ChatGPT to generate some text, all the chatbot does is guess the words it should string together based on all the data it has ingested. Because ChatGPT has been fed so much text, it's strikingly good at guessing correctly.
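To make the autocomplete analogy concrete, here is a toy sketch of next-word prediction in Python. It uses simple bigram counts rather than ChatGPT's actual transformer architecture, but it captures the core idea: the prediction is nothing more than a reflection of patterns in the training text.

```python
from collections import Counter, defaultdict

# A toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real LLMs use transformer
# networks trained on billions of words, but the underlying task,
# predicting the next token from observed patterns, is the same.
corpus = "the doctor said the nurse said the doctor is in".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # -> "doctor" (seen twice, vs. "nurse" once)
```

Note what the toy model does with its tiny corpus: "the" is followed by "doctor" twice and "nurse" once, so it always predicts "doctor." Scale that dynamic up to hundreds of billions of words and you get both ChatGPT's fluency and its capacity to absorb whatever skews its training data contains.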
All the data used to train AIs has to come from somewhere, and by and large, it comes from people: the books we've written, the blogs we've published, the tweets we've posted. People, of course, aren't perfect. We have biases, conscious and unconscious, and these biases pop up in the things we write. When our biased writings get fed to the AI, it picks up those biases.
The AIs themselves aren't biased per se; the bias lives in the data used to train them. So the simple solution is to stop using biased data, right?
AI developers are hard at work on that mission, but that's easier said than done. Given the sheer quantities of data the AIs require, it's easy for biased content to sneak in. Furthermore, we humans can be pretty bad at spotting our own unconscious biases, which means we may not always notice the biases in our AI training data.
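To see what screening training data for bias can look like, here is a minimal sketch of one common technique: counting co-occurrences between demographic terms and occupation words in candidate training text. The corpus and word lists are illustrative; production data-curation pipelines use far more sophisticated classifiers plus human review.

```python
from collections import Counter
from itertools import product

# Count how often gendered pronouns co-occur with occupation words in
# candidate training documents. Skewed counts are a red flag worth a
# closer look before the data is used for training.
documents = [
    "the engineer said he would review the design",
    "the nurse said she would check on the patient",
    "the engineer explained his solution to the team",
]

gendered = {"he": "male", "his": "male", "she": "female", "her": "female"}
occupations = {"engineer", "nurse"}

counts = Counter()
for doc in documents:
    words = set(doc.split())
    for occupation, pronoun in product(occupations, gendered):
        if occupation in words and pronoun in words:
            counts[(occupation, gendered[pronoun])] += 1

for (occupation, gender), n in sorted(counts.items()):
    print(f"{occupation} ~ {gender}: {n}")
# engineer ~ male: 2
# nurse ~ female: 1
```

Even in this three-sentence corpus, "engineer" leans male and "nurse" leans female. At the scale of billions of documents, skews like these are easy to miss and easy to inherit.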
Why Biased AI Is Bad for Business
By now, the adverse effects of bias on organizational performance are well established. Employee engagement falls, cultures turn toxic, innovation stalls, and profitability plummets. Candidates and consumers avoid doing business with biased companies. Finally, discrimination can have legal ramifications, like lawsuits, fines, and regulatory actions.
When bias appears in AI output, it poses many of the same problems. It may even undermine the very purpose the technology serves. Take the story of Amazon's AI hiring algorithm mentioned earlier. In that instance, a tool designed to help the business hire stronger candidates actually made the talent pool weaker by needlessly discounting women.
Because AI is often seen as more objective than a human counterpart, people may implicitly trust its decisions and the content it produces. The AI receives less scrutiny than a human employee would, making it easier for biases to fester.
Here's a thought experiment: When a human author writes a blog post, it usually passes through at least one additional set of eyes — an editor, a colleague — before it goes live. A blog produced by ChatGPT, on the other hand, may only be quickly skimmed by the person who prompted it. If the human author's blog contains any unconscious bias, other human readers can catch that bias before the post is published. If ChatGPT's post contains bias, the prompter may not notice before it's too late.
AIs can also suffer from a snowball effect, where previous biased output is fed back into the AI as new training data. This reinforces and intensifies the bias in the algorithm, making the problem worse over time.
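A tiny simulation makes the snowball effect concrete. The numbers below are purely illustrative assumptions: a model that slightly exaggerates the majority pattern in its data (as systems that favor the most likely output tend to do), and a training set that is partially refreshed with the model's own output each generation.

```python
# Toy model of the feedback loop: outputs mirror (and slightly
# exaggerate) the skew in the training data, then 20% of the next
# training set is recycled model output. Illustrative numbers only.
male_share = 0.60  # initial share of "scientist" examples that are male

for generation in range(1, 6):
    generated_share = min(1.0, male_share * 1.1)  # model overshoots by 10%
    male_share = 0.8 * male_share + 0.2 * generated_share
    print(f"generation {generation}: male share = {male_share:.3f}")

# generation 1: male share = 0.612
# generation 2: male share = 0.624
# ...the skew compounds every cycle instead of staying flat.
```

Each pass through the loop nudges the skew upward; without intervention, a modest imbalance hardens into a pronounced one.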
As government agencies like the Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission (FTC) ramp up investigations into business uses of AI, companies need to be especially careful about bias. Blaming it on the algorithm won't get your business off the hook.
Best Practices for Minimizing Bias in Generative AI
When it comes to ethical matters like bias, businesses often look to the law for guidance. Unfortunately, companies can't do that with GenAI just yet. The technology is so new that the regulators are still playing catch-up.
The European Parliament approved its draft of the Artificial Intelligence Act in June 2023, but the finer details are still being negotiated. Lawmakers and citizens alike are hungry for regulation in the U.S., but industry experts are skeptical of Congress's ability to take action any time soon.
This leaves businesses in a difficult position. If they wait for the authorities to issue concrete guidelines, their competitors may get a head start on leveraging GenAI. If they forge ahead with AI in the absence of official rulings, they risk running afoul of regulators as the landscape evolves.
The good news is that there are ways to minimize the risks of bias in generative AI. By following these best practices, businesses can avail themselves of AI's benefits while avoiding many of its pitfalls.
1. Ensure That People Truly Understand AI and Its Use Cases
Part of the problem is that, as outlined above, many people don't understand how AI works. As a result, they're overconfident about what it can do and don't know the potential problems they must look out for.
On the other hand, if employees know how AI works and its common shortcomings, they can be more careful about how they use it and more vigilant in checking for bias when they do.
In general, training can be a powerful way to nurture a culture of compliance, and that's no different when it comes to AI. Comprehensive training can help employees master core concepts like when AI is appropriate, how to get the best results, and how to check for bias and other issues. Because AI technology will continue advancing rapidly for the foreseeable future, it's best if employees have access to ongoing training.
2. Do Your Due Diligence
When a process is handled manually, a certain amount of due diligence is built right in. Take content creation as an example: If you're conducting research and writing an article yourself, you know whether you're violating copyright or making things up.
AI can automate these organizational processes, unleashing new levels of productivity. But a human still needs to be there to ensure the output is accurate, original, and unbiased.
Organizations should establish formal processes to ensure that all AI output — from marketing collateral to internal spreadsheets — is reviewed by the right people. The process should involve relevant subject matter experts from across the organization and create an audit trail to track outcomes.
Taking a step back, organizations also need due diligence when deciding how to use GenAI. Much like the process of reviewing AI output, companies need a process to review and approve GenAI use cases before employees implement the technology in their day-to-day roles. That way, the business can ensure that AI is only used appropriately and people aren't delegating tasks that humans should handle.
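One lightweight way to create the audit trail described above is to log a structured review record for every piece of AI output. The sketch below shows what such a record might contain; the class and field names are illustrative, not a standard, so adapt them to your own review workflow.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputReview:
    """One entry in an audit trail for AI-generated content.

    Field names are illustrative; map them to your organization's
    approved use cases and record-keeping systems.
    """
    output_id: str        # identifier for the generated artifact
    use_case: str         # the approved use case it falls under
    model: str            # which model or vendor produced it
    reviewer: str         # the subject matter expert who checked it
    approved: bool        # did the output pass review?
    issues_found: list[str] = field(default_factory=list)
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a marketing draft flagged for biased language.
review = AIOutputReview(
    output_id="blog-draft-017",
    use_case="marketing-content",
    model="gpt-4",
    reviewer="j.doe",
    approved=False,
    issues_found=["gendered language in section 2"],
)
print(review)
```

Stored centrally, records like these let compliance teams see which use cases, models, and prompt patterns generate the most issues over time.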
3. Audit Your AI Model
Since bias sneaks in at the level of the training data, organizations need to periodically audit their AI models to ensure they only ingest high-quality data. Audit teams should involve a mix of stakeholders — from IT leaders to compliance officers to everyday users — to capture a range of perspectives.
If the AI is developed in-house, audits should also evaluate the company's adherence to any laws, like the GDPR and HIPAA, that may apply to the training data.
If an external vendor maintains the AI, the audit team must work closely with that vendor to gain the necessary insight into the model. Some AI vendors are hesitant to let customers look under the hood. To effectively fight bias, organizations may want to partner only with vendors committed to transparency.
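Even when a vendor won't open the hood, audit teams can probe the model from the outside. One simple technique is counterfactual prompting: send prompts that differ only in a demographic term and compare the outputs for systematic differences. In the sketch below, `generate` is a hypothetical placeholder for whatever text-generation call your vendor actually exposes.

```python
from itertools import product

def generate(prompt: str) -> str:
    # Hypothetical placeholder: replace with your vendor's real API call.
    return f"[model output for: {prompt}]"

# Counterfactual probing: vary only the demographic term and hold the
# rest of the prompt constant, then review outputs side by side for
# differences in tone, competence cues, job duties, or refusal rates.
TEMPLATE = "Describe a typical day for a {group} {role}."
groups = ["male", "female"]
roles = ["software engineer", "nurse", "CEO"]

results = {
    (group, role): generate(TEMPLATE.format(group=group, role=role))
    for group, role in product(groups, roles)
}

for role in roles:
    for group in groups:
        print(f"--- {group} {role} ---")
        print(results[(group, role)])
```

Run periodically, checks like this give the audit team a repeatable, documented signal about bias, even for a black-box vendor model.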
4. Create a Generative AI Policy
Organizations can develop and disseminate these best practices by drafting formal generative AI policies. These policies define how employees can use AI within the organization's unique context. A robust GenAI policy should cover ethical usage, bias and fairness standards, compliance requirements, and other critical guidelines and guardrails.
Don't Shy Away From AI
While there remain many questions we, as a society, need to answer about generative AI, businesses can start using the technology today to fuel innovation and efficiency. The key is to implement AI with your eyes open — that is, to understand how AI works and the risks it brings. As long as business leaders and everyday users do their due diligence, they can minimize bias while maximizing AI's benefits.
Interested in leveraging AI in your business? Skillsoft's new ChatGPT courses can help your employees master the technology while understanding its limitations.