How to start a grassroots Responsible AI and Tech practice
Six steps you can take right now to make your company's AI and technology more responsible
I'm going to start this week with a brief aside - I am fired up coming out of last week. I learned so much at the inaugural training cohort for the IAPP's AI Governance Professional training, and then had the opportunity to listen to some phenomenal panelists and keynotes and rub elbows with folks whose CVs are the stuff of legend. Not to mention that Biden's Executive Order on AI, the Bletchley Declaration coming out of the AI Safety Summit, and the OMB's draft proposal on AI usage all happened last week as well.
I'm doing my best to cover the most important stuff as it relates to responsible AI, AI ethics, and AI governance. But let me know if there's something you'd like me to cover in particular - with so much going on, something important is bound to slip through my fingers.
With that, on with the show.
Last week notwithstanding, Responsible AI and tech is a pretty new concept. While bits and pieces have existed for decades, it's only in the last 10 years or so that it's started to coalesce into a movement and a discipline. It's good that it's becoming more visible in our cultural awareness, because the pace of innovation in technology and AI has reached breakneck speed. There's a real need for folks to help us create more responsible solutions with our technology and AI.
Some companies are on the cutting edge here, and they have governance teams, responsible innovation teams, and the like to help make their AI and technology more ethical. But most companies don't. Chances are good that if you are reading this article, your company has no responsible AI or tech programming.
But you still want to get involved, and you're still passionate about limiting the harms of AI and tech! So what do you do? Start your own grassroots program internally! How do you do that?
Well, I'm glad you asked. A lot of my work in responsible AI and tech is grassroots. It was a lot of me being the one person in the room asking the uncomfortable questions to help guide our teams toward making more ethical tech.
This week, let's talk about how you can start your own grassroots movement to create more ethical outcomes in AI and tech.
1. Start with the Principles
Responsible AI and Responsible Tech start with principles - the rules that will guide and govern trade-offs, design and development, and priority decisions. In your grassroots efforts, you might be able to use your company's corporate values as a starting point. But in most cases, those values are too broad and 'hollow' to guide and govern decision-making.
Luckily, some folks have already done the heavy lifting for us - the OECD AI Principles and the NIST AI Risk Management Framework (RMF) are great places to start. While the nuance of the principles might change a bit, their broad intent is pretty much the same across different frameworks and focuses on things like trustworthiness, fairness, explainability, and transparency.
But wait, you say, this is just for Responsible AI - what about tech in general? Good news, my friend! The AI principles can also apply more broadly to tech. The same trustworthiness, fairness, explainability, and the rest are just as important to broader tech as they are to AI.
As a grassroots program, you likely aren't going to be able to champion everything in the frameworks I've linked above. You have to pick your battles. Start with one principle, or two at the absolute most, and think about how you can embody those principles within your organization. Think about where you aren't embodying that principle well, and what it would look like if you were.
Let's move on to the tools to help you embody those principles.
2. Use the Tools
It's intimidating when you are bootstrapping a grassroots Responsible AI and Tech practice. There's so much out there on the web about AI, and a lot of it is a little suspect. Luckily, there are exceptional free tools out there to help you out. While they are geared towards formal programs within a company, the foundation is the same whether your program is grassroots or formal. After all, we hope our grassroots efforts will grow into something formal.
Thoughtworks has already created a playbook to get us started, and I wanted to call out a few of my favorites below. I'm not going to spend a lot of time on them because the folks over at Thoughtworks have already done such a bang-up job, and there are a bunch of others I didn't call out.
Consequence Scanning - prompts cross-functional teams to uncover unintended consequences and focus on the ones you want to mitigate (there's a toy sketch of what a session's output might look like after this list)
Responsible Strategy - helps unpack organizational values and make them actionable
Tarot Cards of Tech - there's a non-zero part of me that just really likes the troll factor, but the tool itself is awesome for helping you create hypothetical situations with your product and speculate on the outcomes
Ethical OS - gives you a framework of risk zones and the language to talk about different types of risk and how you might approach them
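To make this a little more concrete, here's a minimal sketch of how a grassroots team might capture the output of a consequence-scanning session. To be clear: this is my own hypothetical illustration, not part of the Consequence Scanning kit or the Thoughtworks playbook - the class names, fields, and severity scale below are all invented for the example.

```python
# Hypothetical illustration only -- not an official tool or format.
# A lightweight way to record a consequence-scanning session: list the
# consequences the team surfaced, mark whether each was intended, and
# triage the unintended ones you want to mitigate first.

from dataclasses import dataclass, field


@dataclass
class Consequence:
    description: str
    intended: bool        # was this outcome part of the plan?
    severity: int         # 1 (minor) .. 5 (severe), the team's judgment call
    mitigation: str = ""  # what we'll do about it, if anything


@dataclass
class ScanningSession:
    feature: str
    consequences: list[Consequence] = field(default_factory=list)

    def triage(self) -> list[Consequence]:
        """Unintended consequences, worst first -- the ones your
        uncomfortable questions in Step 3 should focus on."""
        return sorted(
            (c for c in self.consequences if not c.intended),
            key=lambda c: c.severity,
            reverse=True,
        )


if __name__ == "__main__":
    session = ScanningSession("AI-powered resume screening")
    session.consequences += [
        Consequence("Recruiters save hours per week",
                    intended=True, severity=1),
        Consequence("Model penalizes nontraditional career paths",
                    intended=False, severity=4,
                    mitigation="Audit outcomes across applicant groups"),
        Consequence("Candidates can't contest automated rejections",
                    intended=False, severity=5,
                    mitigation="Add a human review and appeal path"),
    ]
    for c in session.triage():
        print(f"[severity {c.severity}] {c.description} -> {c.mitigation or 'TBD'}")
```

Even if a spreadsheet is all you ever use, the shape is the point: write the consequences down, be honest about which were unintended, and make the mitigation explicit so there's something to follow up on.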
3. Ask the uncomfortable questions
As you go through the tools to help you bootstrap your grassroots responsible AI and tech program, you might notice something. They involve asking a lot of questions, many of which are uncomfortable. But this is the heart of your grassroots movement. Steps 1 and 2 were just setting you up for the kinds of things you need to ask about, and now it's time to ask the actual questions.
This is hard because it often feels like you are being a buzzkill in the room. When everyone else is excitedly talking about all the positives, you ask, "OK, but how does it go wrong?" - and that can be an uncomfortable place. But this is where the value of responsible AI and tech programs comes in - and why it's so easy to start a grassroots movement. At the end of the day, you just ask a lot of questions.
You want to make sure your questions reflect the principles you targeted in Step 1. A grassroots effort isn't going to boil the ocean, so you have to pick your battles.
4. Advocate for diverse folks in the conversation
Inclusivity is a huge part of the responsible AI and tech ecosystem. I recently wrote about the importance of DEI in AI as a means of achieving ethical outcomes - you can read it below.
If you are in the design and development conversations around AI and tech, then you are best suited to advocate for a diverse group of people. Diverse most often means diversity across historically marginalized communities, but it also means ensuring you have the right cross-functional folks in the conversation, and folks with diverse perspectives and opinions.
But advocating for them to be in the room is only the first step. The second is that you need to empower them to contribute. Inviting someone from the Black community to your design session and then ignoring them isn't being inclusive. They need to have a voice in the conversation. They must be seen and heard, not only seen.
5. Get used to being "That person"
This one sucks, and I still struggle with it sometimes. I'm "that guy" on my leadership team. Everyone knows that if I'm there, I'm going to ask questions that aren't fun to answer - I'm going to point out issues they don't necessarily want to think about. For a long time, I was called a "Negative Nancy," a "Debbie Downer," a "buzzkill," and "the doom-and-gloom guy" because I consistently asked uncomfortable questions.
I want to say it gets easier to be "that person" in the room, but it doesn't. You just get better at dealing with your discomfort.
But I did notice a thing that happened over time. People would still roll their eyes when I had a question, but they had an answer - they had anticipated I was going to ask it, and they were prepared. I stopped being the "buzzkill" and started being the "conscience" of the team. Whenever thorny issues came up, my peers on the leadership team started looking to me to help navigate them.
I still get the "buzzkill" label, and the eye-rolls, and I still get the internal anxiety over being "that guy," but I can also see the impact I've had, which makes it worth it.
6. Find your people
I put this as the final point on our list, but it could go anywhere. Even if there aren't other people talking about responsible AI and responsible tech in your company, I can guarantee there are others who are interested. I would be willing to bet that you have already thought of a couple of people. So start the conversation with them. Talk with them about what you want to do and why you want to do it, and invite them to join you. They might, or they might not.
But in my experience, even the folks who won't take a lead role in asking the tough questions will step up to support me after I've asked them. That's just as valuable.
Outside of your organization, there are many great communities in responsible AI and responsible tech that you can get involved in. There are organizations like All Tech Is Human (transparency: I'm an Affiliate in Responsible AI there) and the Responsible AI Institute, lots of great newsletters here on Substack, and communities of folks on social media - we are everywhere. I said diversity is important, and your voice is valuable in these communities, so I encourage you to join one or all of these groups.