How to Write an AI Policy for Your Organization
Generative Artificial Intelligence (GenAI) has gone mainstream, and organizations everywhere are taking different approaches to adoption:
- Samsung banned the use of ChatGPT (an AI chatbot that uses natural language processing to create human-like dialogue) after employees inadvertently shared sensitive information with the tool. Apple, Amazon, JPMorgan Chase & Co., and other organizations have similar restrictions.
- Microsoft embraced ChatGPT technology, as did other companies such as Expedia, Coca-Cola, and Slack, which are incorporating GenAI into their projects and encouraging its use to improve the way their employees work.
No matter where your organization stands on GenAI, it is imperative you create, communicate, and enforce a clear, actionable AI policy so employees understand your expectations of them with respect to the evolving technology.
The Importance of Creating a Generative AI Policy
A Generative AI policy helps organizations navigate the ethical, legal, reputational, and societal challenges associated with deploying AI systems that generate content. It ensures responsible and accountable use of AI technology while safeguarding the interests of users, customers, and the broader community.
Following are some factors you should consider when building out your policy:
- Ethics: GenAI technologies, such as language models, can create highly realistic and convincing content, including text, images, and videos. Without a policy in place, organizations may inadvertently create or propagate misleading or harmful information. A GenAI policy helps ensure users follow a set of ethical guidelines, preventing the misuse of AI technology.
- Reputation and trust: Organizations using AI systems to generate content remain responsible for the quality and accuracy of said content. A GenAI policy helps establish standards and guidelines for maintaining the organization's reputation and the trust of its customers, users, and stakeholders. By ensuring that generated content is reliable and transparent, organizations can build trust with their audiences.
- Compliance with applicable laws: The use of GenAI technologies may have legal implications. For example, generating content that infringes on copyright, privacy, or other intellectual property rights could lead to legal consequences. A GenAI policy helps organizations understand and comply with relevant laws and regulations, reducing the risk of legal disputes and liabilities.
- Data privacy and security: Generative AI models often require large amounts of data to train effectively. An organization's GenAI policy should address data privacy and security concerns, ensuring that user data and other sensitive information are handled in a responsible and secure manner. This includes defining procedures for data anonymization, consent, access controls, and compliance with applicable data protection regulations (a brief illustrative sketch of one such guardrail follows this list).
- Bias and fairness: AI systems can inadvertently perpetuate or amplify biases present in the data used for training. A GenAI policy should include measures to identify and mitigate bias, promoting fairness and inclusivity. This may involve incorporating diversity and representation considerations during training, regularly auditing models for biases, and implementing mechanisms to correct and prevent biased outputs.
- Social impact: Generative AI can have a significant impact on society, influencing public opinion, shaping narratives, and affecting decision-making processes. Having a GenAI policy allows organizations to consider the broader societal implications of their AI systems. This can involve engaging in public discourse, collaborating with external stakeholders, and taking responsibility for the social consequences of their AI-generated content.
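To make the data privacy and security consideration above more concrete, here is a minimal, illustrative sketch of one common guardrail: redacting obvious personal data from a prompt before it is sent to any GenAI service. The patterns and the redact_prompt helper are hypothetical examples for this post, not a complete anonymization solution or any specific vendor's tooling.

```python
# Illustrative only: a minimal pre-submission guardrail that redacts obvious
# personal data (emails, US-style phone numbers, SSNs) from a prompt before it
# reaches a GenAI service. Real anonymization programs go much further.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace matched personal data with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact_prompt("Follow up with jane.doe@example.com at 555-123-4567."))
# Prints: Follow up with [REDACTED EMAIL] at [REDACTED PHONE].
```

A guardrail like this would typically sit alongside consent tracking, access controls, and vendor data-handling reviews; it sketches the idea rather than replacing those procedures.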
An Overview of Skillsoft’s AI Policy
Skillsoft is committed to maintaining a responsible and sustainable generative AI policy for our team, one that is current and flexible and that clearly outlines our ongoing expectations with respect to the technology. We want the policy to grow with us and evolve as the technology evolves.
Our first priority? Ensuring that our organization is on the same page with respect to GenAI. That’s why we decided to include definitions of key terminology – for example, “AI hallucination” – at the start of the policy so we all approach the topic with the same foundation.
Second, we want to ensure proper guardrails for the adoption of generative AI technologies. And so, to gain access to our corporate instance of ChatGPT, hosted through our Microsoft Azure enterprise generative AI service, team members must go through a cross-functional approval process. Like other organizations around the world, we’re continuously assessing generative AI technologies and adapting decisions to the current landscape.
We’ve seen many operational efficiencies in the way our team can work. Yet, we’ve also had to become hyper-vigilant about protecting our data. We’re more careful than ever about what data is being used, for what, and when. In fact, our updated privacy policy now addresses personal data and AI. And we continuously monitor applicable data privacy laws and regulations.
As a result, we’ve developed a robust generative AI policy based on some of the following considerations:
- Ethics: Skillsoft is committed to engaging with GenAI in a way that is both ethical and responsible – ensuring that it remains a force for good. Our policy outlines how our organization expects its stakeholders to use generative AI technologies at work so we are all on the same page.
- Reputation and trust: As a learning industry leader, Skillsoft seeks to implement an AI strategy that is resilient and scalable across the myriad jurisdictions in which we operate. We know responsible and sustainable use of AI technologies is important to our development and sales operations, and we won’t do anything to jeopardize this.
- Compliance with applicable laws: At Skillsoft, our AI policy aligns with our values and principles while taking care not to violate any ethical standards or applicable laws.
- Data privacy and security: Access to ChatGPT has been set up through our Azure enterprise account to afford the same protections as other Azure services (see the illustrative sketch after this list).
- Bias and fairness: Teams using GenAI at Skillsoft have committed to making their best effort to implement adequate quality assurance (QA) to ensure that the outputs of generative AI models do not include copyrighted materials, illegal or impermissible bias, or errors such as AI hallucinations, and do not otherwise violate applicable law or regulation.
- Social impact: We are dedicated to empowering our team to propel Skillsoft (and their careers) forward by ensuring Skillsoft is the place that people in our industry come to learn how to lead and succeed in a world of generative AI. We need everyone in the company to understand ChatGPT, including what it might mean for our customers and our company.
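For readers curious what the data privacy and security item above can look like in practice, here is a minimal sketch, assuming an enterprise Azure OpenAI deployment accessed through the openai Python library. The endpoint, deployment name, and environment variable names are placeholders for illustration; this is not Skillsoft's actual configuration.

```python
# A sketch of routing GenAI requests through an enterprise Azure OpenAI
# deployment rather than the public ChatGPT interface, so traffic stays within
# the organization's Azure account. All names below are placeholders.
import os

from openai import AzureOpenAI  # requires the openai package (v1+)

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # kept in a secrets store, never in code
    api_version="2024-02-01",                            # example API version
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g., https://<your-resource>.openai.azure.com
)

response = client.chat.completions.create(
    model="corporate-gpt-deployment",  # hypothetical enterprise deployment name
    messages=[{"role": "user", "content": "Summarize our generative AI usage guidelines."}],
)
print(response.choices[0].message.content)
```

Access to a deployment like this can then be gated behind the kind of cross-functional approval process described earlier.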
At the end of the day, GenAI usage across Skillsoft is to be governed by our Generative AI Policy, as well as individual team members’ judgment as to how they use the tool. Everyone is informed of the risks, rewards, and expected behavior – and we’re committed to working together to learn as much as we can to help us mitigate risk and optimize reward.
Look for continued educational opportunities from Skillsoft on GenAI and ChatGPT and challenge what is possible. Curious how Codecademy from Skillsoft is approaching AI?
Skillsoft courses are intended to guide and incorporate best practices that help learners derive maximum value from the use of artificial intelligence. They are not intended to endorse or advocate for the methodologies, tools, or outcomes of the artificial intelligence tools referred to or utilized.