Does Your Company Need an AI Use Policy?
If you have employees, you need an AI Use Policy to avoid tears, weeping and gnashing of teeth in the future.
Recently, one of the engineers at my company had a question. They asked, "Is it ethical/acceptable to use code from ChatGPT?" They had asked in a public channel on Slack, and I wanted to make sure I responded in a way that was clear and straightforward. So I pulled up our "Responsible AI Use Guidelines" as a reference and responded with, "No, it's not acceptable to use code generated by ChatGPT in our products."
This was a conversation with engineers, so the back and forth went on for a bit as they tried to find the "limits" of my "No." I copied the specific line from our AI Use Policy, channeled my inner Cady Heron, and said the limit does not exist. There is no threshold of complexity or simplicity at which using code generated by ChatGPT suddenly becomes permissible. The engineers weren't happy with the strictness of the policy, but they accepted it.
This exchange was the perfect example of why every company needs an AI Use Policy. The policy allowed me to clearly and unambiguously answer what could have been a very complicated rabbit hole of edge cases and potential uses, and it helped me guide my colleagues to the outcome that the company's risk tolerance would allow.
Why does my company need an AI Use Policy?
No matter your role within your company, there's a way AI can help. Sales can use it to draft emails, Product can use it to write requirements docs, Engineering can use it to write code, Marketing can use it to help write copy, and Internal Communications can use it to write memos. Your colleagues are going to use AI tools to make their jobs easier, and in doing so, they are going to expose your company to risks you don't want.
You need an AI Use Policy to guide your employees on what they can and cannot do (and why), and to protect the company from unacceptable uses of AI tools that might harm your employees, your customers, and your company overall.
IP Risk
We still don't know how the courts are going to apply intellectual property law to both the training and the output of these models. We've already seen models generate protected content, leak company-confidential information, and draw copyright-infringement lawsuits from rights-holders against the companies behind major foundation models like ChatGPT and Claude.
Who owns what?
There are unanswered questions about who owns the content generated by an LLM, even when it isn't infringing content. For example, if two companies both use ChatGPT to generate the same piece of content, and both claim the rights to it, who wins?
A lawyer friend of mine told his teams this: "If you use any AI-generated works from the public ChatGPT, we lose all ability to copyright, patent, or make any IP claims over the creation. It is essentially public domain at that point."
Adversarial Prompting
Adversarial prompting is a second-order risk for leaking company-confidential information. It happens when threat actors attempt to mine information that they can then use to exploit or otherwise harm your company: they craft prompts that get the foundation models to regurgitate real information about your company that your employees provided to the model, for example through the free version of ChatGPT.
These are just a few examples of the risks you face even if you aren't developing or officially using AI in your business.
What should your AI Use Policy include?
There are many different things you could put in your AI Use Policy. In most cases, though, you want to keep it simple, straightforward, and unambiguous. The more complex and nuanced your policy is, the harder it is for your employees to follow it. Your AI Use Policy needs to include:
Allowed AI Tools/Features - If you've purchased any AI tools for your company, you want to make sure you call them out here. Maybe you're using GitHub Copilot, or your marketing department is using Jasper.AI, for example. Keep in mind that allowed tools and features may live within other tools you've already purchased; for example, you can purchase an AI add-on for Salesforce or Productboard.
Allowed Use Cases - You need to specify the use cases in which the AI tools and features can be used, because not every use case carries the same risk. For example, you might say that using GitHub Copilot is disallowed for brand-new, unannounced features, but allowed for enhancements and improvements to existing features.
Disallowed Uses - You need to list what AI can never be used for. For example, you might say that AI can never be used to write speeches for the CEO, or can never be used in your public-facing products. This section should be explicit, unambiguous, and broad in scope.
How to Get Help - Your policy should reference a review process for new AI features, tools, and uses. AI is complicated, and a lot of folks don't know how to think about risk when it comes to AI. Your policy needs to help them know how to get answers to their questions. While you have an explicit allow list and an explicit disallow list, there's an ocean of gray between the two, and you need to help your colleagues navigate it.
What Happens If You Violate the Policy - Your policy needs to have repercussions behind it to get folks to follow it. These need to be tailored to the needs and risk tolerance of your organization, as well as to the scope and severity of the violation. Finding out that someone used ChatGPT to write an email that violated the policy will require a different reaction than finding out someone drafted an entire public-facing blog post with ChatGPT.
Regular Revisions and Updates - Over the next few years, you should expect to update your policy frequently. AI use is moving at breakneck speed, and you'll need to be constantly iterating to stay current on the latest use cases, tools, and innovations.
An AI Use Policy is the bare minimum that every company needs to have right now. If your company wants to do more than the bare minimum, you should also develop an AI Code of Ethics, AI governance and review processes, an AI Ethics Council, a Responsible AI Use notice for your customers, and more, to keep pace with rapidly evolving capabilities and workforce needs.
What do you think? Do you need an AI Use Policy, or do you think it’s overkill? Let me know in the comments!
Love this article? Share with a friend and spread the good news of Responsible AI!