Ethics in Action: Am I an AI Hypocrite?
The answer is no, but you already knew that. You aren't here for the easy answers.
Fam, I have a confession to make. Byte-Sized Ethics is mostly a one-man show, and I've used AI tools to help me write every article. I'm conflicted about it. On one hand, it makes writing BSE way easier and way more effective than it would be otherwise. But on the other hand, I worry about the tone of my writing, about taking money away from professional editors, and about the exploitative labor that builds AI.
Today on Byte-Sized Ethics, I walk through how I'm using AI in my writing process, the benefits, the harms, and the uncomfortable feeling that never quite leaves no matter what choice I make. All to answer one question for myself: Am I being ethical in my use of AI? Or am I being a hypocrite?
I write the drafts in the free version of Grammarly, relying on its AI to identify basic editing errors, help me eradicate unneeded adverbs, and hit the tone I want.
After that, I bounce over to Claude, Anthropic's AI chatbot, drop in the full text of my article, and ask him a few questions: what he thinks the main points of my article are, how effective the article will be on my Substack, and what suggestions he has to make my writing better. I review all of his responses and make the call about whether to adopt his suggestions, make only minor tweaks, or ignore them all.
I feel guilty that I'm taking money away from human editors. A human editor's quality is leagues beyond what Claude or any AI can give me. I recognize that. But I'm not getting paid for writing Byte-Sized Ethics (yet), though I hope to eventually generate an income here. Editing is hard work, and human editors rightly charge for their expertise. It's hard for me to justify that cost for something that isn't generating any income when Claude and Grammarly can get me to 'good enough' at the moment.
I'm also fitting Byte-Sized Ethics writing into the gaps of my day right now. That means lots of last-minute writing, editing, and changes--usually right up to the moment I hit that publish button. Working with a human editor would be challenging because I would need to structure my writing time better and plan further ahead. That's not a bad thing, but it's also not something I'm doing as I bootstrap my Substack.
I struggle to reconcile the two--AI is a force multiplier and helps me achieve a level of quality I couldn't reach on my own, but AI also replaces human jobs, because I could be paying a human editor to do the same work. As an AI Ethicist, my goal is to help limit the harm that AI causes. I constantly wrestle with the question--is my use of AI causing harm?
Causing Harm is Inevitable
The answer is messy. The short answer is yes, my use of AI causes harm. But the answer is also yes, not using AI causes harm too. If I stopped using AI, the quality of my writing would suffer, which would limit my reach and result in less industry change towards ethical AI. I am harmed because I can't achieve my goal, and the broader industry is harmed because my voice isn't in it. But on the other hand, using AI means I'm harming folks who rely on writing and editing to earn a living.
Humanity is so interconnected and interdependent that it's impossible to make a decision involving technology that doesn't harm someone, somewhere - whether by perpetuating exploitative systems, taking jobs away from humans, contributing to climate change, or deliberately limiting your own potential, which can harm you and those who depend on you.
We are surrounded by trolley problems--trolley-problem inception, if you will. There are dilemmas within dilemmas within dilemmas, and every action we take has negative consequences. My choices involving the use of AI are no different. No matter the path, I'm causing harm.
Do I think I’m a hypocrite?
So do I think my use of AI is ethical? Today, I do. Not because I've found a path where no harm is caused by using or not using it, but because I've acknowledged to myself that there is no path here where I don't cause harm somehow. After acknowledging that, it's a matter of determining what amount of harm is acceptable for me to cause, and to whom.
Right now, the harm caused by my use of AI is pretty minimal. Without the tools, I would be trying to handle all of this myself, or repeatedly imposing on my husband or friends to proofread. I'm not at a point where it makes sense for me to engage an actual editor. So by my personal values and morals, the good I can do by using these AI tools to communicate more effectively about AI and tech ethics outweighs the harms caused through exploitative labor practices and climate change impact--granting that these harms exist but are infinitesimally small at the individual level.
My conclusion isn't permanent. As the situation changes and evolves, so does my determination about whether my actions are ethical or not. If I get to the point where I could start paying an editor, my stance could change. If we find out that using AI is dramatically accelerating climate change, my stance would change. Morals, values, and ethics are ongoing and participatory. We can never claim a "win-state" in being ethical. Our society evolves, and so what we consider ethical evolves along with it. It's a constantly moving target.
As a Shotokan karateka, I often apply Gichin Funakoshi's 20 Principles of Karate to other parts of my life. There's one that perfectly describes how I view acting ethically with technology: "Do not think of winning. Think, rather, of not losing."
What about you? Do you think my use of AI is ethical? Let me know in the comments!