The 'No Section 230 Immunity for AI Act' Misses the Mark
And causes a lot more problems in the process
Lately, I've been thinking a lot about a particular interaction from one of my favorite book series, Otherland by Tad Williams. In this particular scene, one of the protagonists, Renie, is talking to a person whose demeanor is so flat and formulaic that she asks if they are an AI, knowing that in the book, AIs are required to identify themselves when asked. It's also considered a great insult if you ask a human if they are an AI. It turns out in the book that Renie was, in fact, just talking to a particularly dull human who was very offended by her asking.
When I first read about the No Section 230 Immunity for AI Act last week, the first question that popped into my head was born out of that scene in Otherland—how exactly would you know if something is AI-generated content or not? As I pondered that, I realized that there were, in fact, many other glaring issues in the bill. It was out of this pondering that this article was born.
Today, I set the stage for what Section 230 is and why it matters. Then I discuss how the bill, No Section 230 Immunity for AI Act, changes Section 230. After that, I delve into the many problems I see with the bill and how I think we could do better.
For the record, this article was not written by artificial intelligence, but I did use it for some help with copy editing. Jus' saying ;-)
Background
Senators Josh Hawley (R) and Richard Blumenthal (D) introduced a new bipartisan bill that aims to strip Section 230 protections from websites for generative AI content. Broadly speaking, Section 230 shields platforms like Facebook, Twitter, and others from liability for user-generated content, so they can't be held liable when a user posts something illegal. Removing those protections for generative AI content would make platforms legally liable for anything involving generative AI, which would have a broad chilling effect on media platforms and on generative AI as a whole.
The bill is attempting to get ahead of the harms we see coming from generative AI. Today, we have tools that can create deepfake videos of people doing things they've never actually done. We can synthesize anyone's voice from almost no source audio, and we've been able to manipulate photos to tell a different story for decades. It's a scary prospect, and there's almost no accountability. Often, deepfakes just appear on the internet and get propagated without any indication of who created them. With generative AI, this problem worsens because it becomes easier to generate content.
So, the senators are trying to stem the tide of harmful generative AI content by making internet platform providers liable for the content, stripping the immunity that they currently receive from Section 230 for AI-generated content. The idea is not necessarily bad in the grand scheme, but this approach creates more problems than it solves and ignores the fundamental issue that we often can't tell when something is generated by AI versus a person.
What is Section 230?
Section 230 is part of the Communications Decency Act (CDA) of 1996, originally intended to protect minors from sexually explicit materials. The majority of the CDA was declared unconstitutional in 1997 because it violated the First Amendment. However, the courts severed Section 230 from the rest of the CDA, and it remains in effect today. Many experts credit Section 230 for the explosive growth and importance of the Internet.
It consists of two paragraphs: the first is known as the "Leave Up" clause, and the second as the "Take Down" clause. Together, they grant covered platforms broad liability protections for user-generated content they decide to leave up or take down, removing the threat of liability for infringing or illegal content from the service provider. It enables features like comment sections, personal ads, forums, and essentially any platform on the internet where users can submit content.
I take a deeper dive into Section 230 in a recent piece on MassivelyOP that I encourage you to read if you're interested.
No Section 230 Immunity for AI Act
The bill itself is short and proposes only one change to Section 230 - an additional paragraph that states:
(6) NO EFFECT ON CLAIMS RELATED TO GENERATIVE ARTIFICIAL INTELLIGENCE.—Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit any claim in a civil action or charge in a criminal prosecution brought under Federal or State law against the provider of an interactive computer service if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.’’; and
Then later, the bill adds a definition for Generative Artificial Intelligence:
‘‘(5) GENERATIVE ARTIFICIAL INTELLIGENCE.—The term ‘generative artificial intelligence means an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.’’.
For example, let's say a hypothetical person named Jared posts two different images online. In the first image, Jared has hand-drawn a cartoon that contains libelous content against a business. The only action he's taken is to draw the image, scan it, and then post it online. For the second image, he wants to see what an AI image generator can do, so he prompts one with a description of his cartoon and posts the resulting image.
Under Section 230 as it stands today, if the business wants to sue for the libelous content, in both cases, they would sue Jared, not the platform where he posted the images. Section 230 protects the platform from liability.
However, things get more complicated if the Act makes it into law.
In our first scenario, Jared is solely responsible for creating the infringing content, and liability stays with him. In the second scenario, however, liability is either shifted to or shared with the internet platform where Jared uploaded the content...assuming we could tell it was created by generative AI.
Even if we could define it, we couldn’t actually find it
There are some major challenges with the bill as it stands today. The definitions for artificial intelligence, artificial intelligence systems, and generative artificial intelligence are too broad and would extend beyond what the senators intended. Features like erasing people out of photos with the photos app on your phone would be captured here, and even autocorrect on your phone could fall under this exemption for AI.
It's a difficult task to define AI in a way that's not overly broad or overly narrow. Our legislative branch isn't known for being the most tech-savvy group of people—this task is beyond their capabilities. Experts in the field struggle to define the point where an algorithm and code become artificial intelligence. It's unreasonable to expect that non-technical legislators would be able to do so.
They might be able to address some of this with a "material change" or "material contribution" requirement, which would require the offending speech to meet a certain threshold of materiality before liability could be assigned to the internet platform. However, adding such a requirement highlights the nonsensical nature of the immunity exemption: fundamentally, the internet platform has no more and no less control over AI-generated content than it has over content from any other source.
But at the end of the day, no matter how you define it or try to limit it, the entire exemption is unenforceable because we can't reliably tell whether something was AI-generated. We don't have a mechanism like Renie's, where she can ask and people are obligated to answer. We have a few tools that claim to detect generated content, but how well they work today, let alone how well they will work in the future, remains unclear.
As generative models continue to evolve, the markers that might indicate generated content will also continue to shift. Any tool that detects AI-generated content will be in an arms race with the models themselves: as the models incorporate more parameters, how they generate content will also evolve, so these tools will have to keep sprinting just to keep pace.
Additionally, these tools all work on individual pieces of content; there's no way to apply them at scale. They aren't incorporated into Facebook, Instagram, or TikTok today, though they very likely will be in the future. Even then, these detection systems won't be 100% accurate and will run into the same arms-race problem the individual tools face today. At best, we can only hope to identify some AI-generated content, never all of it. That gap would let bad actors exploit the unreliability of AI-content detection to manipulate outcomes and assign blame where they'd like.
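To make the arms-race point concrete, here is a minimal sketch of the kind of perplexity heuristic some detection tools rely on: score how predictable a passage looks to a small language model and flag "too predictable" text as likely machine-generated. The threshold, the model choice, and the looks_generated helper are illustrative assumptions on my part, not how any particular product works.

```python
# A minimal, illustrative sketch (assuming the Hugging Face `transformers` and `torch`
# packages) of a perplexity-based heuristic for spotting machine-generated text.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how 'predictable' a passage looks to a small language model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return torch.exp(outputs.loss).item()

# Hypothetical cutoff, not a validated value: "too predictable" gets flagged.
SUSPICION_THRESHOLD = 20.0

def looks_generated(text: str) -> bool:
    # Lower perplexity is often read as "more likely machine-generated",
    # but human writing can score low and newer models can score high.
    return perplexity(text) < SUSPICION_THRESHOLD
```

Even in this toy form, the weakness is obvious: the cutoff is arbitrary, plenty of human writing scores "low," and every new model generation shifts the distribution the heuristic depends on.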
How can we do better at getting ahead of the harms of AI?
The core issue with generative AI content is authenticity. People want to know whether they are engaging with content generated by artificial intelligence or content created by actual people. Authenticity becomes more crucial as generative AI grows more capable and it gets harder to distinguish real content from generated fakes. The misinformation and disinformation we've experienced in recent years, which has resulted in the needless deaths of hundreds of thousands of people, will only worsen as generative AI advances.
By implementing labeling mechanisms, we can create greater transparency and accountability in the use of generative AI. This would allow users to make informed decisions about the content they consume and interact with. It can also encourage responsible use of generative AI technology and help distinguish AI-generated content from content created by humans. It's not a cure-all, and there will be people who circumvent the labeling, for both innocuous and nefarious purposes. But it's a start.
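As a rough illustration of what a labeling mechanism could look like on the platform side, here is a minimal sketch of a provenance record attached to an upload. The field names and the label_upload helper are hypothetical, invented for this example rather than drawn from any existing standard, and a real system would need to verify the declaration rather than just record it.

```python
# A minimal sketch of a platform-side provenance label. The field names and the
# label_upload helper are hypothetical, not drawn from any existing standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ContentProvenance:
    content_id: str
    ai_generated: bool        # declared by the uploader or the generating tool
    generator: Optional[str]  # e.g., the model or tool the uploader names
    declared_at: str          # when the declaration was recorded (UTC, ISO 8601)

def label_upload(content_id: str, ai_generated: bool, generator: Optional[str] = None) -> str:
    """Build a JSON label a platform could store and display alongside the content."""
    record = ContentProvenance(
        content_id=content_id,
        ai_generated=ai_generated,
        generator=generator,
        declared_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Example: a user discloses that an uploaded image came from a generative model.
print(label_upload("img_123", ai_generated=True, generator="hypothetical-image-model"))
```

The hard part isn't the record itself; it's getting honest declarations and verifying them, which is exactly where regulation and industry cooperation would have to do the work.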
Addressing the challenges posed by AI requires a multifaceted approach that involves technological advancements, industry cooperation, public awareness, and responsible legislation. It's important to strike a balance between fostering innovation and ensuring ethical, accountable use of AI technologies. The No Section 230 Immunity for AI Act tries to strike this balance and misses pretty dramatically.
But that doesn't mean there isn't a path forward. While this particular approach won't work, the amount of potential harm AI can cause demands legislation to mitigate it. We can't rely only on industry standards and cooperation. After all, when I ask someone if they're an AI or not, I want to have confidence in the response I get, even if I accidentally offend someone.