Artificial Intelligence

Life, Liberty, and the Right To Shitpost

Generative AI is a powerful tool for creativity and speech. Efforts to censor, regulate, and control it threaten America's tradition of open discourse.


Expression has never been more convenient. Censorship has never been easier. 

From research papers on arXiv to mukbang videos on YouTube, digital content is easily accessible to anyone with an internet connection. Meanwhile, authoritarian regimes, enterprising bureaucrats, and the self-appointed speech police work to hide heretical ideas and shape information flows.

Expressive freedom is foundational to America. Our forefathers were experts at writing scandalous articles, drawing salacious cartoons, and distributing satirical pamphlets. Some of this was done with their real names, but many preferred anonymity. Thomas Paine, Benjamin Franklin, and Alexander Hamilton were some of the original anonymous shitposters. 

People have always bemoaned advances in technology, with critics claiming each new medium creates problems that require the state to step in, intervention that usually ends up protecting incumbents. Generative AI is such a technological advance, and many are working to tame its expressive potential. Limiting AI would mean accepting a more sanitized and controlled world, as well as capitulating on America's value of expressive freedom. 

Efforts to homogenize, hinder, or otherwise constrain generative AI's development must be opposed. Americans must defend their right to shitpost. 

Generative AI's Promise and Peril

Generative AI is a force multiplier for creative expression. Just as earlier technologies such as the printing press lowered barriers to creative endeavors, today's newest expressive tools are cutting the time it takes to illustrate a book or mix a new beat. This follows the trajectory of other software advances such as word processing and grammar checks, video editing, and Photoshop: functional improvements that lower barriers to creating and sharing novel content. 

Generative models represent a step up from these earlier developments, as they are easy to use, enable skill enhancement, and have the potential for long-term benefits. These tools save time, personalize output, and support expression.

Despite its benefits, AI will inevitably be misused. In 2024, deepfaked nude images of Taylor Swift spread like wildfire on social media, an appalling violation of the kind many people, not just celebrities, have experienced. We cannot sweep these harms under the rug, but neither can we allow misuse to overshadow the technology's enormous potential. Handling the abuses of AI should focus on mitigating harmful acts rather than imposing controls on speech-promoting technologies.

No, It Can't Do That!

Polling done by the AI-focused nonprofit Fathom found that the proliferation of AI-generated deepfakes and misinformation is among Americans' greatest concerns about AI. These concerns give legislators an opportunity to lock down these tools in pursuit of fairness and safety. But the most visible threat to the right to shitpost comes not from proposed laws but from lawsuits brought by incumbent industries over the presence of copyrighted materials in datasets used by AI developers. Lawsuits from creatives and corporations could threaten AI model development if courts are receptive to their arguments.

A compulsory licensing regime that many rights holders seek would disadvantage U.S. developers and grant the rights holders total control over model training. Considering copyright maximalists' history of bringing lawsuits that stymie speech, this deluge of litigation could, at best, create a system where AI developers would have to pay enormous royalties to rights holders. At its worst, such a push could enable media and creative incumbents to dictate training and even downstream uses of AI, which would inhibit the general public's freedom of expression.

Bills that allow people to sue someone for invoking their identity are having a moment. These "right of publicity" laws introduce legal liability for using an individual's name, image, or likeness without their permission. While traditionally limited to commercial use of someone's likeness, legislation has been proposed at the federal level and enacted in some states that would make it much easier for people to sue over any unauthorized use of their likeness. This could create another avenue for chilling speech, particularly critical forms of expression. Imagine needing President Donald Trump's or former Vice President Kamala Harris' permission before generating a satirical cartoon of them.

Concerns about deepfakes should be taken seriously, but legislation should focus on tangible harms or acts of illegality. One of the most problematic uses of generative AI is to create synthetic child pornography. Legislation such as the SHIELD Act would make the creation and distribution of this content illegal, extending existing law covering sexual exploitation of real children. A similar approach could be taken for using generative content for other harmful activities such as fraud. In most instances, we should be seeking to clarify the law and provide recourse for those who are tangibly harmed, but not unduly saddle AI developers and users with liability.

The most diffuse threat to generative AI's support of speech comes from rules and regulations attacking "algorithmic bias" and extending liability to developers for users' behavior. Legislators at the state and federal levels have proposed laws that would require pre-deployment testing and post-deployment monitoring to ensure AI models are not contributing to discrimination. Similar language permeated the Biden administration's Blueprint for an AI Bill of Rights, which called for model developers to conduct "equity assessments" as well as proactively prevent models from creating harm that is "unintended, but foreseeable."  

Intent matters. As with concerns about the right to publicity, addressing concerns around discrimination should be grounded in existing law related to identifying discriminatory intent. If a model is designed to intentionally discriminate against a certain protected class, then it would already violate existing civil rights laws. 

Putting guardrails on how models can respond to queries related to controversial topics—whether through hard law (government legislation or regulation) or soft law (nonbinding codes of conduct or commitments induced by nongovernment organizations)—embraces a paternalism that is unlikely to produce better outcomes. Transparency in how models are built, including around training data and architectural choices, would be a more honest and potentially powerful commitment to fairness. 

The Right To Shitpost Is the Right to Think

America's tradition of free speech stems from a rejection of Old World censorship as the Founders sought to build a society where dissent, debate, and diverse viewpoints could thrive. The right to mock, parody, satirize, and poke fun at those in power—the right to shitpost—is foundational to the American ethos. 

Currently, the creative and expressive potential of AI is restrained less by the vague principles encoded by developers than by the person sitting at the keyboard. The utility one can derive from an AI system depends on the user's knowledge, creativity, and prompting techniques. 

The iterative improvement of models requires people to use them in ways that may not have been previously envisioned by their developers, which should be celebrated rather than denigrated. There will be downsides. However, rigid laws and top-down controls that impact model capability will necessarily limit the expressive benefits of generative AI. Evolution based on market signals that are informed by user preferences will create a product that is more in line with people's interests. Cutting off the ability for an AI to learn just because it could support heretical speech or ideas goes against the spirit of the First Amendment and allows a select few to hold a veto over technology and, by extension, free expression. 

In a recent essay, First Amendment scholar Eugene Volokh examined the shift between early software development and today's world of algorithms. In the earlier period, developers built products that put users in control, such as word processors and browsers. But today's platform and app developers impose a top-down experience rife with opportunities for jawboning and censorship. He proposes a return to the era of "user sovereignty," where we can use digital tools freely, as opposed to our current environment, where digital tools are controlled by others.

The ability to harness language, images, and music in ways that were out of reach for many has the potential to unlock a new era of content production and consumption. Empowering people to leverage generative AI to discover new skills and share their creations is an exciting opportunity to advance humanity's pursuit of knowledge and creativity—two virtues integral to a living and thriving public. Defending people's ability to build and use such technology unencumbered is a path worth following. We must defend the right to shitpost.