How to Limit Your AI Risk
Ignoring the problem doesn't make it go away - you have to take steps to educate and guide AI usage.
So you've been reading up on GenAI and the 'AI All the Things' trends, and you're getting concerned. It seems like there's a lot of risk: AI is everywhere, and it feels impossible to get your arms around what's happening, let alone protect your company, your colleagues, and your customers from AI-induced risk. You aren't alone. Most companies are taking a much more conservative approach than the "thought leaders" on LinkedIn would have you believe.
So what do you do? You limit the risk by being intentional about how you use AI and how your vendors use AI.
In this article, I'll walk through the high-level steps you need to take to reduce risk to your company, your employees, and your customers. First, make sure you have a written policy. Then, review all of your vendors and find out who's pushing AI solutions. Finally, make it part of your company culture.
I'm happy to help turn these articles into reality for your organization. If you have questions, problems, or just want to discuss approaches, feel free to reach out to bytesizedethics@substack.com and we'll set up some time.
Implement an AI Usage Policy
First, you need to codify that you want to avoid AI usage. I've already written out the aspects your policy should contain. This policy will guide the rest of your approach. If you plan to allow some limited AI usage, the policy becomes even more important: it's where you list out allowed tools, disallowed tools, and allowed and disallowed uses.
The policy should give your company guidance on how (and how not) to engage with AI, give your colleagues the tools to make responsible decisions, and let them know what could happen if they violate these guidelines. A well-crafted policy is your first line of defense against risky AI usage.
If you want to limit AI risk as much as possible, your AI policy must include how to respond to vendor AI usage. After governing your employees' use of foundational models like ChatGPT, how your vendors use AI is the greatest AI risk to your company. While you might be doing your due diligence, there's no guarantee your vendors are doing the same. The AI land rush means that a lot of vendors are stapling AI offerings onto their existing products and, in many cases, enabling those features for you without your knowledge or consent. It's great (/s).
Conduct a Vendor Review
Once you've written the policy with a vendor AI usage section, you need to do a vendor review to see which vendors are using AI. This could be a time-consuming process if you have a lot of vendors. The best way to do this is to come up with a questionnaire and require your vendors to complete it.
Overall, you'll want to know if they are using AI, how they are using it, what data powers it, and how they manage the data they send to the foundational models. Some of the things you'll want to ask:
Does your solution include any AI features, specifically generative AI features like those powered by ChatGPT?
How is the AI used? What goals or features does it enable? What does it do in your product?
What foundation model(s) does your feature use (e.g., GPT-4, Claude 3, an in-house model)?
Can I enable or disable these AI features?
What data is sent to the model?
Can I control what data is sent to the model, by field or record?
Is my data used to train the model?
What assurances, outside of contractual obligations, do you have to ensure that the model isn't being trained on my data?
What is the retention period of any data I send to you?
These questions should help quantify the risk you assume by using each vendor and their AI features. The biggest areas of concern are what data the vendor is sending to the foundation models, whether that data is used to train the models, and how long that data is retained.
There's a good chance your vendor won't be able to answer every question, but their inability to answer tells you something about the risk you'd be adopting by using their AI tools. In my experience with a similar questionnaire, vendors couldn't answer everything. When I asked what assurances they had, outside of contractual obligations, that the data they sent to the foundational model wasn't being used to train that model, no one has had anything beyond the contract thus far. That's problematic, because it's almost impossible to tell whether your data has been used to train a model until a breach occurs.
Once you've gathered this information, you can decide whether each vendor's AI use is acceptable to you or not. You should track it somewhere, through a tool like Trustible.ai or Enz.ai. At a minimum, drop it all into a spreadsheet for tracking.
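If you go the spreadsheet route, here's a minimal sketch in Python of what that tracking could look like, with one column per questionnaire answer. The column names, the file name, and the "ExampleCRM" vendor are my own illustrative assumptions, not a prescribed schema - adapt them to your questionnaire.

```python
# Minimal sketch of a vendor AI tracking spreadsheet (flat CSV).
# Column names, file name, and the example vendor are illustrative only.
import csv
import os

COLUMNS = [
    "vendor",                # vendor name
    "has_ai_features",       # does the product include (generative) AI features?
    "ai_use_description",    # what the AI does in the product
    "foundation_models",     # e.g. GPT-4, Claude 3, in-house
    "can_disable",           # can the AI features be turned off?
    "data_sent_to_model",    # what data goes to the model
    "field_level_controls",  # can you control what's sent, by field or record?
    "used_for_training",     # is your data used to train the model?
    "training_assurances",   # assurances beyond the contract
    "retention_period",      # how long they keep the data you send
    "risk_decision",         # e.g. approved / approved with restrictions / rejected
    "next_review_date",      # when you'll re-check their answers (ISO date)
]

def append_vendor_record(path: str, record: dict) -> None:
    """Append one vendor's questionnaire answers, writing a header for a new file."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        if new_file:
            writer.writeheader()
        writer.writerow({col: record.get(col, "") for col in COLUMNS})

# Hypothetical example entry
append_vendor_record("vendor_ai_review.csv", {
    "vendor": "ExampleCRM",
    "has_ai_features": "yes",
    "foundation_models": "GPT-4 via hosted API",
    "used_for_training": "no, contractual assurance only",
    "risk_decision": "approved with restrictions",
    "next_review_date": "2025-06-30",
})
```

Even this bare-bones version gives you one place to look when a vendor changes their AI features or a customer asks what you've vetted.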
Incorporate it into Your Company Culture
Now that you've got a policy and you've audited your vendors and their use of AI, it's time to make it part of your company culture. This can be a daunting task, especially considering how seductive GenAI can be as a perceived efficiency booster. But there are a few methods to help:
Make your employees review and acknowledge your AI Usage policy.
They will grumble about it, but you went to all that effort to write the policy; make sure that everyone reads it and acknowledges it, because you don't want anyone to be able to say, "I didn't know."
You should make your policy acknowledgment a yearly thing, and keep it posted in an easy-to-find, easily referenceable place. You also need to talk about it frequently. It shouldn't be a check-a-box-once-a-year exercise that no one thinks about again. AI is changing rapidly, and you want your employees and colleagues to think about how they use AI all the time, not just when they click "I acknowledge".
Update your MSAs - with customers and with vendors
AI and GenAI are still getting a lot of attention... even 18 months after OpenAI released ChatGPT to the public. With this attention, fueled by hyperbolic and bombastic claims from "experts", your customers will be on high alert for GenAI usage, and you should be too.
Work with your legal department to codify the rules and guidelines around AI and GenAI in your MSAs with customers and with your vendors. You don't want any surprises from your vendors, and you want to be clear with your customers about how you'll use GenAI so they don't have any surprises either.
Incorporate your AI Questionnaire into your vendor sourcing
Take that questionnaire you created before and make it part of every vendor evaluation so that you know what you are getting into before you sign the paperwork. Because AI is evolving so fast, you also need to have a 6-month or yearly review with the vendor to confirm that their answers are still accurate.
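Continuing the illustrative spreadsheet sketch from above, a few lines of Python can flag which vendors are due for that recheck. Again, the file and column names carry over from my earlier assumptions, not from any standard tool.

```python
# Minimal sketch: flag vendors whose periodic AI review date has passed,
# reusing the illustrative vendor_ai_review.csv from the earlier sketch.
import csv
from datetime import date

def overdue_reviews(path: str) -> list[str]:
    """Return vendor names whose next_review_date is today or earlier."""
    today = date.today()
    overdue = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            review = (row.get("next_review_date") or "").strip()
            if review and date.fromisoformat(review) <= today:
                overdue.append(row["vendor"])
    return overdue

print(overdue_reviews("vendor_ai_review.csv"))
```

Run something like this on a schedule, or just sort the spreadsheet by review date once a quarter - the point is that the review actually happens, not the tooling.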
Bonus Round:
You might also want to publish a public-facing AI usage notice and guidelines so that they're easy for your customers to see and find. But you should only do this if you are confident you can abide by what you are saying. If you post a public notice and then don't follow what you said, the FTC could show up at your door.
There you go, three straightforward ways that you can limit your AI exposure and risk through policy and process. What do you think? Do you have any other ways to help limit your AI risk?
Do you need some help implementing these recommendations? I'm happy to help - just shoot me an email at bytesizedethics@substack.com