What do all these different AI Ethics terms mean, anyway?
Responsible AI professionals use a lot of words that look like they could mean the same thing. Or do they? What do all these things mean?!
I've worked in technology and product development for over 15 years, and I've learned there is one thing you can always count on in tech: people have 309 different words and acronyms to describe almost the same thing. The field of Responsible AI and Responsible Tech is no exception.
We use terms like Ethical AI, Responsible AI, Responsible Tech, Responsible Innovation, Trustworthy AI, and more almost interchangeably. They are all "in the same ballpark," but each describes a different aspect of the same discipline. Because they are so similar, it is hard to know whether Responsible Innovation is the same thing as Responsible Tech, or whether Trustworthy AI is the same as Ethical AI.
This week, I thought it was time to describe how I use and understand all these different terms and how they work together. I bucket them into three categories: "What ethical characteristics do we want our software to have?", "How do we achieve them?", and "What do we call it when we think we've achieved them?"
Keep in mind that this is how I differentiate these terms. Others might have different definitions or understandings. But this should give you a solid foundation for understanding the whole discipline.
Tech Ethics / AI Ethics / Responsible AI Principles / Responsible Tech Principles
This category of terms answers the question: what ethical characteristics do we want our software to have? Typically, these terms refer to a collection of morals, principles, and values that align with or exceed accepted standards of behavior. These morals, principles, and values are characteristics like fairness, freedom from bias, transparency, or explainability.
Companies often articulate these characteristics through a "Code of Ethics" or company values, so what each company means when it says "value-aligned" or "ethical" varies. While there is a lot of variation, most tech and AI ethics reference the seven characteristics of trustworthy AI called out in the NIST AI Risk Management Framework: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.
The main thing to remember about tech and AI ethics is that they are abstract and not actionable. They are abstract because they describe desired characteristics, not the methods used to achieve them. They are also subject to interpretation: what fairness means to me might be different from what fairness means to you, and the same goes for transparency or explainability. The answers to questions like these vary based on who you ask and in what context. Finally, ethics and principles are not actionable on their own. I cannot implement explainability, nor can I implement fairness. Instead, I have to take further steps to achieve those characteristics.
Responsible AI / Responsible Tech / Responsible Innovation
This category of terms refers to the guidelines, policies, and processes that help an organization express the characteristics defined in its tech and AI ethics. Collectively, these come together in a Responsible AI / Responsible Tech program, team, or department within an organization. That team, sometimes just one person, uses those guidelines, policies, and processes to help ensure that the AI or technology the company creates has the characteristics defined in the code of ethics or the company values.
These teams are also frequently responsible for regulatory compliance. While there isn't much formalized AI regulation yet, we know a tsunami of regulation and compliance requirements is on the horizon. These teams often have to ensure AI and tech align with company values and codes of ethics while also making sure the company meets its compliance requirements.
However, regulatory compliance in technology and AI is usually the bare minimum a company has to do, and stopping there often falls short of any real ethical outcome. The presence of a Responsible AI / Tech / Innovation team indicates that a company wants to achieve a higher standard than what is legally required.
Responsible AI / Ethical AI / Ethical Tech / Trustworthy AI
The final category of terms covers alignment with the stated code of ethics, company values, and compliance requirements. If a company develops an AI system that meets or exceeds its code of ethics and compliance requirements, the company will say it has developed Ethical AI, Responsible AI, or Trustworthy AI.
This designation isn't an official one, so it's important to remember that what a company decides is ethical or aligned with its values might not match what *you* think is ethical or aligned with your values. When a company makes a claim about ethical tech, you always need to dig in and check whether its actual values or code of ethics align with your own.
There could be an exception to this in the future: the NIST framework mentioned above calls out "Characteristics of Trustworthy AI," and Biden's Executive Order from a few weeks ago also references Trustworthy AI. That could signal that "Trustworthy AI" will come to mean something specific, while Ethical AI remains open to interpretation.
So what do you think? Does this framework make sense? Are there other terms you've heard? Where do they fit, or not fit? Let me know!
They say that a rose by any other name would smell as sweet, but when it comes to AI ethics, the debate over terminology can get thornier than a cactus. It's a bumpy road, but somebody's gotta keep those algorithms in check.