Should you care about AI wiping out humanity?
No. Probably. But maybe yes. It is a question that is a lot like an onion - it has layers. Also, trying to get to the center will probably make you cry.
I've been digging into this whole "AI is going to wipe out humanity" thing lately. The argument feels easy to accept because, as a people, we've been anxious about this for decades. Science fiction has no shortage of stories about AI deciding that the most logical outcome is the annihilation of the human race. The stories are always kind of mum about why this is the outcome, so it's presented as self-evident.
Setting aside the question of why we think this is the only possible outcome for a superintelligent creation, it is pretty clear to me that most of the AI doomsday predictions are silly. That made me wonder: should we actually care about AI wiping out humanity? And if not, what should we care about?
Welcome to this week's Byte-sized ethics, where we explore this question without any further references to onions.
Should you care about AI wiping out humanity?
No, you shouldn't care, for three reasons. The first is that these AI Doomsday scenarios aren't real today. They are, at best, hypothetical. While AI wiping out humanity is possible, it's not probable. There are many reasons for this: AI still struggles to process novel information it's never seen before, it doesn't actually "know" anything right now, and it vomits out the statistically most likely text. Any time it answers something correctly, it's unintentional and accidental -- because AI doesn't have any intention at all. Think about how you'd define what it means to 'know' something. It's deceptively difficult to do. Now imagine trying to code 'knowing' into a thing when you struggle to define it yourself.
The folks pushing the AI Doomsday scenarios are relying on your ignorance about AI today to scare you, virtue signal, and make money (because of course there's money involved). The gulf between where we are today and where we'd need to be for any doomsday scenario to be realistic is huge. Your toaster is as much of an existential threat to humanity as AI is today.
Second, AI has been around for decades. It was less sophisticated, absolutely, but we've been using artificial intelligence to make decisions for us for years. Acting like this is suddenly a new thing we have to deal with is disingenuous, because AI isn't new and its impact on our lives isn't novel -- it's just more visible now.
Third, worrying about an AI Doomsday takes focus away from the problems AI has today. AI is causing problems and hurting people the world over, and the Doomsday folks would like you to not think about that. For example, Allegheny County Children and Youth Services still uses an artificial intelligence model that flags economically disadvantaged families as high risk for child abuse -- because they are economically disadvantaged, not because of actual indicators of abuse. These families are subject to random inspections for child abuse not because of anything they did, but because the model said they might.
Police are also using AI to predict recidivism rates. These predictions are frequently incorrect and consistently flag people of color as being at higher risk of recidivism -- and recommendations about parole are then made based on a model they know is unfair.
Insurance companies are using AI to deny claims in bulk. Your claim for a necessary medical procedure could be denied by your insurance company without a human being ever actually seeing it.
But the Doomsday Folks aren't all bad
While #TeamDoomsday is racking their brains trying to find increasingly outlandish ways AI is going to annihilate us for more clicks, their work isn't completely without value. As I discussed in my article last week, the doomsday scenarios are rooted in real problems that we see today. If we take principled steps to address the harms we are experiencing from AI today, the doomsday scenarios that were already improbable become even more improbable.
Should you care about the problems AI causes today?
You should, without a doubt. It's easy to say that the problems are small, or that you don't need to worry about them because it's not happening to you. But if AI is left unregulated, it will cause problems for everyone.
Here are a couple of not-very-far-fetched scenarios:
You apply for a mortgage on a new home. It's well within your means to make the payments and you meet all the requirements, but you are denied because your sibling defaulted on a credit card payment five years ago. Seems absurd, right? But a very similar thing is happening today with recidivism software: people are rated at a higher risk of recidivism if they know someone else who's been convicted of a crime.
An insurance company gives its AI the goal of optimizing profits for the organization and the authority to act without human intervention. The AI starts denying claims for life-saving care because letting those patients die increases profits for the insurance company. The insurance company sees record profits, and it's months before anyone notices the trend. Thousands of people die as a result of being denied life-saving treatments.
You receive a video call from your best friend. They recommend a new product for you to buy. You spend a significant amount of money on the product and call your best friend back to talk about it. They have no idea what you are talking about. Turns out that an advertising agency used AI to infer the relationship between you and your best friend, and that you are more likely to make a purchase based on their recommendation. The AI creates a deep-fake, interactive video, recreates your friend's voice, and then places the video call to you to make the product suggestion.
These hypothetical situations are months or years away at most, with huge ramifications for how we interact with the world and each other. And they are just a few possibilities -- there are thousands more out there that we haven't thought of yet.
Let's not Forget the Good Stuff
I spent a lot of time talking about the harms of AI, but it is not all bad, or even mostly bad.
AI is being used to:
Predict cancer earlier
Help us manage man-made climate change
Improve healthcare by discovering new proteins
Prevent fraud
Make our cybersecurity more effective
These are big ways that AI is good for us. But it also helps in smaller ways -- giving us personalized information, helping with writing and editing, and even detecting heart arrhythmias from your wrist. AI is going to make life a lot better for a lot of people -- ethical AI is about making sure that everyone gets a piece of the good stuff.
What does it mean to care about Ethical AI?
What it means to care about ethical AI is different for different people. As an individual and a citizen:
Caring about Ethical AI means letting your legislators know it's important to you, and that you support efforts to regulate it. Most of the time we only talk to our legislators to complain, but it's equally important to reach out and let them know when you support something they are doing.
Even easier than that, just talking about ethical AI helps keep it top of mind.
Support the people talking about Ethical AI, by subscribing and pledging to this newsletter and others like it.
Support companies that champion ethical AI -- ones that make public commitments to ethical AI, protect your privacy, and generally do right by you.
Don't support companies that don't champion ethical AI. It can be tough to know who does and doesn't, but in cases where it is egregiously obvious, don't support them.
As a professional -- a developer, product manager, compliance specialist, or really any other role:
Ask uncomfortable questions at every stage of the development lifecycle.
Be intentional -- you can't solve every problem you uncover, so be transparent about the problems you aren't solving, and why.
Be consistent and persistent. Ethical AI isn't set-it-and-forget-it. It's a moving target and as we grow as a culture, what "ethical AI" means will change. Build your ethical AI muscle memory.
Wrap Up
AI isn't going to turn into Skynet any time soon, if it ever does. But that doesn't mean we shouldn't care about ethical AI. There are plenty of problems with AI today, and many more will arise in the coming months and years that we will need to deal with. But not everything is doom and gloom, and we should care about the helpful outcomes of AI just as much as the harmful ones.
At the end of the day, should you care about ethical AI? The answer is yes. Ethical AI ensures we get more helpful outcomes from AI and fewer harmful ones.