Life, Liberty, and the Right To Shitpost
Generative AI is a powerful tool for creativity and speech. Efforts to censor, regulate, and control it threaten America's tradition of open discourse.

Expression has never been more convenient. Censorship has never been easier.
From research papers on arXiv to mukbang videos on YouTube, digital content is easily accessible to anyone with an internet connection. Meanwhile, authoritarian regimes, enterprising bureaucrats, and the self-appointed speech police work to hide heretical ideas and shape information flows.
Expressive freedom is foundational to America. Our forefathers were experts at writing scandalous articles, drawing salacious cartoons, and distributing satirical pamphlets. Some of this was done with their real names, but many preferred anonymity. Thomas Paine, Benjamin Franklin, and Alexander Hamilton were some of the original anonymous shitposters.
People always bemoan advances in technology, usually claiming that each new medium creates problems requiring the state to step in and protect incumbents. Generative AI is such a technological advance, and many are working to tame its expressive potential. Limiting AI would mean accepting a more sanitized and controlled world, as well as capitulating on America's value of expressive freedom.
Efforts to homogenize or hinder generative AI's development or to constrain it must be opposed. Americans must defend their right to shitpost.
Generative AI's Promise and Peril
Generative AI is a force multiplier for creative expression. Just as earlier technologies such as the printing press lowered barriers to creative endeavors, today's newest expressive tools are cutting the time it takes to illustrate a book or mix a new beat. This follows the trajectory of other software advances such as word processing and grammar checks, video editing, and Photoshop: functional improvements that lower barriers to creating and sharing novel content.
Generative models represent a step up from these earlier developments, as they are easy to use, enable skill enhancement, and have the potential for long-term benefits. These tools save time, personalize output, and support expression.
Despite its benefits, AI will inevitably be misused. In 2024, deepfaked nude images of Taylor Swift spread like wildfire on social media, an appalling violation of a kind many others have experienced. We cannot sweep these harms under the rug, but neither can we allow misuse to overshadow the technology's enormous potential. Handling the abuses of AI should focus on mitigating harmful acts rather than on imposing controls on speech-promoting technologies.
No, It Can't Do That!
Polling done by the AI-focused nonprofit Fathom found that the proliferation of AI-generated deepfakes and misinformation is among Americans' greatest concerns about AI. These concerns give legislators an opportunity to lock down these tools in pursuit of fairness and safety. But the most visible threat to the right to shitpost comes not from proposed laws but from lawsuits brought by incumbent industries over the presence of copyrighted materials in datasets used by AI developers. Lawsuits from creatives and corporations could threaten AI model development if courts are receptive to their arguments.
A compulsory licensing regime that many rights holders seek would disadvantage U.S. developers and grant the rights holders total control over model training. Considering copyright maximalists' history of bringing lawsuits that stymie speech, this deluge of litigation could, at best, create a system where AI developers would have to pay enormous royalties to rights holders. At its worst, such a push could enable media and creative incumbents to dictate training and even downstream uses of AI, which would inhibit the general public's freedom of expression.
Bills that allow people to sue someone for invoking their identity are having a moment. These "right of publicity" laws introduce legal liability for using an individual's name, image, or likeness without their permission. While traditionally limited to commercial use of someone's likeness, legislation has been proposed at the federal level and enacted in some states that would make it much easier for people to sue for any unauthorized use of their likeness. This could create another avenue for chilling speech, particularly for critical forms of expression. Imagine needing President Donald Trump or former Vice President Kamala Harris' permission before generating a satirical cartoon of them.
Concerns about deepfakes should be taken seriously, but legislation should focus on tangible harms or acts of illegality. One of the most problematic uses of generative AI is to create synthetic child pornography. Legislation such as the SHIELD Act would make the creation and distribution of this content illegal, extending existing law covering sexual exploitation of real children. A similar approach could be taken for using generative content for other harmful activities such as fraud. In most instances, we should be seeking to clarify the law and provide recourse for those who are tangibly harmed, but not unduly saddle AI developers and users with liability.
The most diffuse threat to generative AI's support of speech comes from rules and regulations attacking "algorithmic bias" and extending liability to developers for users' behavior. Legislators at the state and federal levels have proposed laws that would require pre-deployment testing and post-deployment monitoring to ensure AI models are not contributing to discrimination. Similar language permeated the Biden administration's Blueprint for an AI Bill of Rights, which called for model developers to conduct "equity assessments" as well as proactively prevent models from creating harm that is "unintended, but foreseeable."
Intent matters. As with concerns about the right to publicity, addressing concerns around discrimination should be grounded in existing law related to identifying discriminatory intent. If a model is designed to intentionally discriminate against a certain protected class, then it would already violate existing civil rights laws.
Putting guardrails on how models can respond to queries related to controversial topics—whether through hard law (government legislation or regulation) or soft law (nonbinding codes of conduct or commitments induced by nongovernment organizations)—embraces a paternalism that is unlikely to produce better outcomes. Transparency in how models are built, including around training data and architectural choices, would be a more honest and potentially powerful commitment to fairness.
The Right To Shitpost Is the Right to Think
America's tradition of free speech stems from a rejection of Old World censorship as the Founders sought to build a society where dissent, debate, and diverse viewpoints could thrive. The right to mock, parody, satirize, and poke fun at those in power—the right to shitpost—is foundational to the American ethos.
Currently, the creative and expressive potential of AI is restrained less by vague principles encoded by developers than by the person sitting at the keyboard. The utility one can derive from an AI system depends on the user's knowledge, creativity, and command of prompting techniques.
The iterative improvement of models requires people to use them in ways that may not have been previously envisioned by their developers, which should be celebrated rather than denigrated. There will be downsides. However, rigid laws and top-down controls that impact model capability will necessarily limit the expressive benefits of generative AI. Evolution based on market signals that are informed by user preferences will create a product that is more in line with people's interests. Cutting off the ability for an AI to learn just because it could support heretical speech or ideas goes against the spirit of the First Amendment and allows a select few to hold a veto over technology and, by extension, free expression.
In a recent essay, First Amendment scholar Eugene Volokh examined the shift between early software development and today's world of algorithms. In the earlier period, developers built products that put users in control, such as word processors and browsers. But today's platform and app developers impose a top-down experience rife with opportunities for jawboning and censorship. He proposes a return to the era of "user sovereignty," where we can use digital tools freely, as opposed to our current environment, where digital tools are controlled by others.
The ability to harness language, images, and music in ways that were out of reach for many has the potential to unlock a new era of content production and consumption. Empowering people to leverage generative AI to discover new skills and share their creations is an exciting opportunity to advance humanity's pursuit of knowledge and creativity—two virtues integral to a living and thriving public. Defending people's ability to build and use such technology unencumbered is a path worth following. We must defend the right to shitpost.
Sarc is redeemed.
Dammit. No wonder you made ambassador.
=D.
I'm apparently ambassador for a number of people now.
How far into the depths of the AI winter must chatbots regress to render Reason's core commentariat redundant?
How many scam websites mimicking actual good websites will AI generate to push incoherent blather?
Cu feeble invectivve and evvasive obsessivve compulsivve consonant doubling .
We are still busy using our insights to rip apart Reason's AI-generated articles.
Is that how you figured out that Norwegian parrot was just pining?
Generative AI is a powerful tool for creativity
By definition, not sure that is true. Generative AI is inference - patterns. Creativity is anti-pattern. Things that have never before been associated together.
Now maybe you can feed a prompt into a chatbot: generate an image of a bullfrog playing a ukulele while eating Froot Loops. Superficially that may look 'creative' on your part. But that AI is just going to spit back patterns based on those words. And rather than adding new info from that image to produce better 'intelligence' (as would happen when true creativity is added), AI that is trained on AI-generated anything quickly turns to shit, for reasons that I'm not sure can ever change.
Plus - ANY AI from Big Tech will obliterate anything truly creative and undermine anything truly revolutionary.
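The "AI trained on AI quickly turns to shit" claim above has a name in the research literature: model collapse. The mechanism can be caricatured in a few lines. This is a toy sketch under strong assumptions of my own (a one-dimensional Gaussian standing in for a "model," and trimming the tails standing in for generators favoring typical, high-probability output), not a real training run:

```python
import random
import statistics

def collapse_demo(generations=10, sample_size=2000, keep=0.9, seed=42):
    """Toy caricature of 'model collapse': each generation draws samples
    from the previous generation's fitted Gaussian, keeps only the most
    typical `keep` fraction (dropping both tails), and refits the mean
    and standard deviation. Rare/tail content disappears, so diversity
    (measured here as the standard deviation) shrinks every round."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the original human-made data
    sigmas = [sigma]
    for _ in range(generations):
        samples = sorted(rng.gauss(mu, sigma) for _ in range(sample_size))
        cut = int(sample_size * (1 - keep) / 2)
        kept = samples[cut:sample_size - cut]  # drop both tails
        mu, sigma = statistics.fmean(kept), statistics.stdev(kept)
        sigmas.append(sigma)
    return sigmas

sigmas = collapse_demo()
print(f"diversity: gen 0 = {sigmas[0]:.2f}, gen 10 = {sigmas[-1]:.2f}")
```

Under these assumptions the diversity figure falls generation after generation toward zero: the refitted model keeps narrowing around whatever was most typical in the previous model's output.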
You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?
Can you?
You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?
We are not 'just machines' nor are we 'imitations of life' but AI is certainly both of those things so I'm afraid I don't see your point.
AI is certainly an impressive feat of violating copyright and trademark at a scale never before seen in world history, but all the myriad questions about AI that have been asked since such machines were ever conceived still apply.
Do they think, or do they just regurgitate based on a very complicated algorithm?
For example, is AI capable of writing a wholly original work of philosophy on the human condition from its own experiences, or will it just combine elements of things it's read to create a reasonable facsimile of such a thing?
A group of humans were very creative when they made these things, but are the final products merely an extension of their creators' creativity? I'd say so.
It was creative to invent the wrench; it is not creative to use a wrench in its intended way. It's also creative to use a wrench as a hammer when you don't have a hammer, for that matter. Would that even occur to a computer? Probably not.
The only thing you can really say about these AI systems is they are starting to make people without any philosophical background start to consider what is a mind and what is uniquely human.
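The "regurgitate based on a very complicated algorithm" question can at least be made concrete at the toy end of the scale. A bigram (word-pair) model is the crudest ancestor of a modern language model, and it makes the recombination point vividly: every pair of adjacent words it emits already appeared somewhere in its training text. This is only a sketch of the pattern-following mechanic; real LLMs generalize far more than this, so it illustrates the question rather than settling it:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=8, seed=1):
    """'Write' by repeatedly picking a word that followed the current
    word somewhere in the training data. Every adjacent word pair in
    the output already exists in the corpus: recombination, not
    invention."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the robot can write a symphony and the robot can "
          "paint a canvas and the robot can read a book")
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Whatever sentence comes out, by construction it is stitched entirely from word pairs the model has already seen.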
Jfree is a member of the master race! He can do anything
>>bullfrog playing a ukulele while eating Froot Loops.
seems like Sugar Smacks and Warner Brothers copyright violations
We live in an attention economy. What happens when your news aggregators and social media feeds get stuffed with AI-generated propaganda and disinformation? We already suffer through loads of click-bait BS and MAGA strategies to flood the zone. It's bad already. Soon we will not be able to quickly assess what is human-generated, truth-tracking news and what is AI-generated chaff.
When we all employ our own AI avatars to shitpost round-the-clock, what will we do? It's not hard to envision a world where, in order to hide from the corporate bots, we spread our own personal disinformation. If everyone is doing this it could be problematic. Our AI avatars could even mimic our enemies, so we can always use a form of plausible deniability: "I didn't post that racist thing, some other AI bot was mimicking me." Good luck disentangling the truth when you have trillions of AI-generated shitposts to filter through. We will then need to pay our overlords to create filters and offensive bots that protect us and go after our enemies.
So no, it's not so simple as having the right to shitpost. We are already in a brave new world where the old rules increasingly make no sense. Now good luck figuring out which of the replies to this comment are bots vs. ignoramuses. It's getting harder to tell with every passing day.
Thank you ActBlueGPT.
Everything is so difficult and unfair!
We go back to basics and look at long-term patterns. There is a reason Trump started his presidency with 50% approval and is now at 70%. He said people were being lied to, and he showed it.
>nude images of Taylor Swift spread like wildfire on social media—an appalling violation that many people experience.
So . . . is AI generated child porn harmful then?
I mean, if AI generated porn harms Trailer, it must surely harm children.
It's an interesting libertarian perspective, that's for sure. The idea that you have an inalienable right to be free from unapproved image distortion, or from incorrect perception by other people, based on a vague, hand-waving invocation of harm.
Of course the harm to the object of the porn and the harm to the viewer of the porn would be two different effects or harms.
Calling Taylor trailer is kind of funny.
Vverily, let us not go to Commentalot:
It is a vvery silly place.
One of those is defamation of a real individual with a fake image that would be harmful to their career since people might reasonably believe it's real, the other is a fake image of a fake person that is disgusting but doesn't actually harm anyone real. At worst it offends our sensibilities and might serve to normalize such behavior in reality, both of which aren't terribly valid claims even if they feel morally offensive.
In essence, they are not the same thing at all.
Open discourse is not allowed on leftist websites.
But that's their prerogative...to be a fascist while simultaneously calling other people who do not agree with them a fascist.
Not all censorshit is from the left! Snot by a LONG shit-shot!
https://futurism.com/the-byte/twitter-suspending-more-people
UNDER "FREE SPEECH ABSOLUTIST" ELON MUSK IS ACTUALLY SUSPENDING WAY MORE PEOPLE THAN BEFORE. . .Twitter-in-the-Shitter under Elon Musk of the Elongated Tusk now follows in the shit-steps of “Parler”!!! Twat and UDDER slurprise!!! Parler censors liberals per Techdirt https://www.techdirt.com/2020/06/29/as-predicted-parler-is-banning-users-it-doesnt-like/
https://arstechnica.com/tech-policy/2023/04/parler-shuts-down-as-new-owner-says-conservative-platform-needs-big-revamp/
Parler shuts down as new owner says conservative platform needs big revamp
Parler sold to firm that says conservative Twitter clone isn't a "viable business."
The free market has spoken, and says that shit doesn’t like cuntsorevaturd CensorShit either!!! Are You Deeply and PervFectly Butt-Hurt? If so… Start up Your Own PervFect web shit-site!
It amazes me how you can twist Democrats shutting down Parler and blame-shift it onto Musk.
You've got a really disgusting habit there.
Did the Evil Deep State and the Demon-Craps shut down candles and replace them with light bulbs, or did the free markets (and the customers) speak?
Indeed.
The #1 concern about AI is 'governments' use of AI to "abridging the freedom of speech".
Cutting off the ability for an AI to learn just because it could support heretical speech or ideas goes against the spirit of the First Amendment and allows a select few to hold a veto over technology and, by extension, free expression.
I assume that what is meant here is that the 'cutting off' represents only the legislative or governmental action-- it in no way refers to any consortiums of tech and AI companies who are literally doing this, right now.
ABOLISH COPYRIGHT! Patents, too.
You sure are wrong about the Founders.
First of all, they espoused anonymity because they wanted the arguments to be about the arguments and not the arguers.
They found many things worthy of banning or censorship, and they took freedom of expression as protected only through freedom of religion. So this business of Reason's, attacking and censoring what it doesn't like based on a supposed religious intention, is completely opposed by the Founders. They would never let you argue against a pro-life position by saying it is a religious position.
Much of the article rests on first principles rejected by most of the Founders.